A Controversial Study: AI-Generated Images Cause a Stir in Scientific Community
A peer-reviewed scientific journal recently retracted a study that featured nonsensical AI-generated images, including an absurd depiction of a gigantic rat penis. The paper, authored by three scientists in China, edited by a researcher in India, and reviewed by individuals from the United States and India, was published in the open-access journal Frontiers in Cell and Developmental Biology earlier this week. It quickly gained attention on social media due to the bizarre, nonsensical nature of its AI-generated figures.
The figure that caught everyone's attention showed a rat with an outrageously large dissected penis and testicles, accompanied by garbled labels such as "iollotte sserotgomar cell" and "testtomcels." The authors disclosed that they had used the generative AI tool Midjourney to create the images, which only deepened the astonishment surrounding the publication.
Frontiers, the journal's publisher, initially added a notice stating that the paper had been corrected and that a new version would be released soon. It later retracted the study entirely, acknowledging that it did not meet the journal's standards of editorial and scientific rigor. Frontiers issued an apology to the scientific community, expressing regret for the oversight and thanking readers for flagging the issue.
It later emerged that one of the reviewers had raised concerns about the AI-generated figures and had requested revisions from the authors. Those concerns went unaddressed, and the flawed study was published anyway. Frontiers says it is conducting an internal investigation to determine how its processes failed to catch the authors' non-compliance with the reviewer's requests.
The paper was reviewed by two individuals, one based in India and the other in the United States. Motherboard reached out to the U.S.-based reviewer, who said they had evaluated the study solely on its scientific merits and that the decision to publish the AI-generated images rested with Frontiers, provided the authors disclosed their use of Midjourney. The reviewer added that such images must nonetheless be accurate, which was plainly not the case here.
This embarrassing incident highlights the broader concerns surrounding generative AI and its impact on academia. Science integrity consultant Elisabeth Bik expressed her disappointment in a blog post, stating that this incident serves as a sad example of how scientific journals, editors, and peer reviewers can be naive or complicit in accepting and publishing AI-generated content of questionable quality.
The incident is a reminder that, for all the advances generative AI has made, its use demands caution and responsibility. The scientific community must remain vigilant in upholding rigorous standards to preserve the integrity of research and prevent such failures from recurring.