Journal forced to retract paper after authors are caught using ChatGPT to write it

The journal, Physica Scripta, said the paper did not meet its 'ethical policies'

A scientific journal was forced to retract a paper it published last month after it was discovered the authors used the artificial intelligence application ChatGPT to write it.

The paper, published Aug. 9 in the journal Physica Scripta, was an attempt to uncover new solutions to a complicated math equation, but included the phrase "Regenerate response" on the third page — something one eagle-eyed reader recognized as the label of a button in ChatGPT, according to a report from Nature.


The authors of the paper have since acknowledged they used ChatGPT to help write the manuscript, something that wasn't caught during two months of peer review after the paper was submitted in May. The revelation led the U.K.-based publisher to retract the paper because the authors did not disclose their use of the AI app when they submitted it.


In this photo illustration, the ChatGPT logo is seen displayed on a laptop screen. (Stanislav Kogiku/SOPA Images/LightRocket via Getty Images)

"This is a breach of our ethical policies," Kim Eggleton, who is in charge of peer review and research integrity at IOP publishing, said in a statement, according to Nature. 

The apparent copy-and-paste error was discovered by computer scientist and integrity investigator Guillaume Cabanac, who since 2015 has made it a personal mission to uncover papers that are not transparent about their use of AI.

"He gets frustrated about fake papers," said Cyril Labbé, a fellow computer scientist who works with Cabanac to uncover the papers, according to a report from Futurism.

The ChatGPT app is seen surrounded by other apps on a smartphone. (OLIVIER MORIN/AFP via Getty Images)


Cabanac was also behind the recent discovery of a similar situation with a paper published in Resources Policy, which he found included "nonsensical equations," according to Futurism.

While the peer review process for publishing papers is supposed to be rigorous, the volume of research being published leads to some things falling through the cracks. David Bimler, a researcher who also hunts for fake papers, said many reviewers do not have the time to spot sometimes subtle hints that AI was used in a paper.

The OpenAI logo displayed on a phone screen and ChatGPT on AppStore displayed on a phone screen are seen in this illustration. (Jakub Porzycki/NurPhoto via Getty Images)

"The whole science ecosystem is publish or perish," Bimler said, according to Futurist. "The number of gatekeepers can't keep up."

Physica Scripta did not immediately respond to a Fox News request for comment.
