Study Featuring AI-Generated Giant Rat Penis Retracted, Journal Apologizes

A peer-reviewed scientific journal that this week published a study containing nonsensical AI-generated images, including one of a gigantic rat penis, has retracted the article and apologized.

The paper was authored by three scientists in China, edited by a researcher in India, reviewed by two people from the U.S. and India, and published in the open access journal Frontiers in Cell and Developmental Biology on Monday. Despite undergoing multiple checks, the paper was published with AI-generated figures that went viral on social media because of their absurdity. One figure featured a rat with a massive dissected dick and balls and garbled labels such as “iollotte sserotgomar cell” and “testtomcels.” The authors said they used the generative AI tool Midjourney to create the images.

On Thursday afternoon, Frontiers added a notice saying that the paper had been corrected and a new version would be published soon. The journal later updated the notice to say that it was retracting the study entirely because “the article does not meet [Frontiers’] standards of editorial and scientific rigor.”

Reached for comment, a spokesperson for Frontiers directed Motherboard to a statement posted to the journal’s web page on Thursday apologizing to the scientific community and explaining that, in fact, a reviewer of the paper had raised concerns about the AI-generated images, but those concerns were ignored.

“Our investigation revealed that one of the reviewers raised valid concerns about the figures and requested author revisions,” Frontiers’ statement reads. “The authors failed to respond to these requests. We are investigating how our processes failed to act on the lack of author compliance with the reviewers’ requirements. We sincerely apologize to the scientific community for this mistake and thank our readers who quickly brought this to our attention.”

The paper had two reviewers, one in India and one based in the U.S. Motherboard contacted the U.S.-based reviewer, who said that they evaluated the study based solely on its scientific merits and that, since the authors had disclosed their use of Midjourney, it was up to Frontiers whether or not to publish the AI-generated images. Frontiers’ policies allow the use of generative AI as long as it is disclosed but, crucially, the images must also be accurate.

The embarrassing incident is an example of how the issues surrounding generative AI more broadly have seeped into academia, in ways that are sometimes concerning to scientists. Science integrity consultant Elisabeth Bik wrote on her personal blog that it was “a sad example of how scientific journals, editors, and peer reviewers can be naive—or possibly even in the loop—in terms of accepting and publishing AI-generated crap.”