AI vs Academia! Scholars Using ChatGPT to Write Papers
Last month, a paper published in the journal Physica Scripta sparked controversy after computer scientist and integrity investigator Guillaume Cabanac noticed that the ChatGPT interface phrase 'Regenerate Response' had been accidentally left in the article text.
The authors have since admitted to using the chatbot to draft the article, the latest example of generative AI's reach into academia and of the ethical concerns it raises. Kim Eggleton, Head of Peer Review and Research Integrity at IOP Publishing, stated: 'This violates our ethics policy.'
Cabanac has been uncovering published papers that fail to disclose their use of AI since 2015, when the technology was still a novelty. As computers have become capable of producing increasingly realistic, human-like text, the hunt has grown more challenging. That has only strengthened Cabanac's resolve, and he has helped expose hundreds of AI-generated manuscripts.
Some authors carefully cover their tracks, leaving no obvious clues, but detectives like Cabanac can still spot many telltale signs. He recently exposed another paper, published in the journal Resources Policy, that contained several glaring traces. The journal stated it was 'aware of the issue' and is investigating.
AI models also frequently garble facts and can be too inept to accurately reproduce the mathematical and technical terminology of scientific papers, as with the nonsensical equations that appeared in the Resources Policy study.
ChatGPT may also fabricate claims out of thin air, a phenomenon perhaps too generously described as 'hallucination.' A Danish professor recently discovered that a preprint was partly AI-generated because it cited non-existent papers under his name.
Given the supposed rigor of the peer-review process, it is astonishing that AI-generated counterfeit papers can slip through at all.