Fabricated Foundations of Modern Research — 55% of AI References Are Fake!
By: Enago

The rapid uptake of generative artificial intelligence in research is fueling an unprecedented surge in rejections and retractions that is shaking the foundations of scholarly publishing. Once rare, retractions now target articles in elite journals and entire research groups.
The problem is not hypothetical: an estimated 13% of biomedical abstracts published in 2024 involved some degree of AI-generated text. Editors report rejecting an ever-growing number of submissions for suspected AI use, from fake references to altered data. This silent crisis is infecting the entire scientific record, jeopardizing researchers’ careers, institutional reputations, and billions in public funding with every undetected, AI-generated manuscript that passes as genuine scholarship.
Why Is AI’s Fluency Both a Strength and a Danger?
AI language models excel at producing grammatically perfect, stylistically polished text. For many researchers—especially those writing in a second language—this is a major advantage. However, this fluency can be deceptive. Well-written, confident text can hide factual inaccuracies, subtle shifts in meaning, or misleading claims. The danger lies in the false sense of security such polished output creates.
Errors from AI aren’t always obvious. A slightly inaccurate technical term, a misplaced modifier that changes the intended meaning, or a confident claim presented without solid evidence can all pass unnoticed. For example, stating that a drug “prevents” a disease when it merely “reduces the risk of” that disease could seriously misrepresent the evidence in the scientific literature.
What Are “Hallucinations” and Why Should Researchers Fear Them?
Hallucinations occur when AI tools confidently present fabricated information as if it were factual: non-existent references, outdated data, or misinterpreted concepts. Because these outputs are polished, even experienced researchers may miss them. A peer-reviewed study in Scientific Reports found that 55% of citations generated by GPT-3.5 and 18% of those generated by GPT-4 were entirely fabricated, and that many of the genuine citations contained substantive errors.
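Fabricated references are also among the easiest hallucinations to catch mechanically, because most legitimate articles carry a DOI that can be looked up in a public registry. The sketch below is a minimal illustration of that idea, assuming citations have already been reduced to DOI strings; the Crossref REST API endpoint is real, but the sample DOIs and the helper name doi_in_crossref are placeholders for illustration.

```python
# Minimal sketch of mechanical citation checking, assuming references
# have already been parsed into DOI strings. It queries the public
# Crossref REST API (api.crossref.org), which returns HTTP 404 for
# DOIs it has no record of. The DOIs below are placeholders.
import json
import urllib.parse
import urllib.request
from urllib.error import HTTPError

def doi_in_crossref(doi: str) -> bool:
    """Return True if Crossref has a record for the given DOI."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            record = json.load(resp)
        # Print the registered title so it can be compared with the
        # citation as written in the manuscript.
        title = record["message"].get("title") or ["<no title on record>"]
        print(f"  Crossref title: {title[0]}")
        return True
    except HTTPError as err:
        if err.code == 404:   # Crossref has never seen this DOI
            return False
        raise                 # other errors (rate limits, outages): recheck manually

# Placeholder reference list; substitute the DOIs from your own manuscript.
for doi in ["10.1000/placeholder.example", "10.9999/obviously.fake.2024"]:
    verdict = "found" if doi_in_crossref(doi) else "NOT FOUND: verify by hand"
    print(f"{doi}: {verdict}")
```

A DOI that resolves is only a first filter: a hallucinated citation can attach a real DOI to the wrong paper, so the returned title and authors still need a human comparison against the reference as written.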
Can Small AI-Induced Errors Really Damage Credibility?
Absolutely. The consequences of unverified AI-generated content are becoming increasingly visible. Journals and preprint platforms are beginning to flag, reject, or retract papers that contain errors introduced by AI tools. These aren’t cases of fraud or data manipulation—they’re the unintended results of misplaced trust.
The “vegetative electron microscopy” incident illustrates this. An AI mistranslation turned the legitimate phrase “electron microscopy of vegetative structures” into nonsense, which then appeared in almost two dozen published papers. Though the underlying science was sound, the error raised questions about the authors’ expertise, and some of the work faced rejection. Even minor terminology errors can undermine reputations in academic publishing.
Why Can’t AI Understand Science the Way Humans Do?
Current AI models don’t comprehend meaning, context, or scientific reasoning. They predict language patterns rather than truly understanding content. They cannot distinguish between proven facts and speculative claims unless explicitly directed.
Judging methodological decisions or subtle limitations in research demands domain expertise, critical thinking, and contextual understanding: skills only humans can provide. AI can smooth language, but it cannot reliably detect logical inconsistencies, factual errors, or context mismatches. Without human review, these subtle but damaging flaws can pass directly into publication, where they may result in rejection, retraction, or loss of professional credibility.
What Is the Right Way to Use AI in Research Writing?
Treat AI as an assistant, not an authority. Use it to enhance efficiency and polish language, but subject every output to meticulous human review. Researchers must develop AI literacy, and human judgment must remain the final safeguard ensuring that every claim, citation, and conclusion is accurate, deliberate, and trustworthy.
Many universities, journals, and funding bodies now require disclosure of AI use in manuscripts, and their policies mandate that all AI-assisted work be reviewed and approved by a qualified human expert before submission. A consolidated comparison of major publisher policies is available on our publisher guidelines page. These are not bureaucratic hurdles; they are essential safeguards that preserve the accuracy, trustworthiness, and credibility of scientific communication.