By: Enago

Integrity Alarms Sound as AI Text Creeps Into One in Every Three Academic Papers!

Artificial intelligence is now deeply woven into the fabric of research writing, and that's both revolutionary and risky. A 2024 Oxford University Press survey shows that 76% of researchers already use AI tools in their academic work. Meanwhile, a large-scale analysis of 15,000 oncology abstracts from the ASCO Annual Meetings found that the share of AI-detectable content in 2023 was more than double that of 2021.
This rapid infiltration of algorithmically generated writing is forcing publishers to confront a troubling question: When does AI use become plagiarism, and who is accountable?

What is AI Plagiarism?

Traditional plagiarism involves copying someone else’s ideas or text without credit. AI plagiarism is murkier and more dangerous. When researchers use generative AI tools like ChatGPT to produce sections of manuscripts, the text isn’t technically “stolen,” but it’s not fully original either.

AI models synthesize learned data patterns to generate new text, meaning the output can contain borrowed phrasing, fabricated citations, or unverified claims. This blurring of authorship and accountability has led experts to warn that AI-generated plagiarism isn't always detectable, but it's very real.

Why is Detecting AI-written Text so Challenging?

AI text detectors use machine learning models to identify patterns typical of synthetic language. However, studies have found these tools to be inconsistent and error-prone, with false-positive rates as high as 14%. One preprint study reported that academic reviewers could correctly identify just 68% of ChatGPT-written content, meaning nearly one in three AI-written texts slipped through unnoticed. As newer models mimic human tone more convincingly, detection will only get harder.

That's why AI detection should be treated as a screening aid, not proof of authorship, and must always be supplemented by expert human oversight.
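To illustrate why pattern-based detection can only ever be probabilistic, here is a minimal, hypothetical Python sketch of the kind of stylometric screening such tools build on. It measures two crude signals, sentence-length variation ("burstiness") and vocabulary reuse, and flags low-variation, repetitive text for human review. The `burstiness_screen` function and its thresholds are illustrative assumptions, not any real detector's method.

```python
import re
import statistics


def burstiness_screen(text: str, min_sentences: int = 5) -> dict:
    """Crude stylometric screen: flags text with unusually uniform
    sentence lengths and heavy vocabulary reuse for human review.
    A flag is a screening signal, never proof of AI authorship."""
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    if len(sentences) < min_sentences:
        return {"verdict": "too short to screen"}

    # "Burstiness": how much sentence length varies, relative to the mean.
    lengths = [len(s.split()) for s in sentences]
    burstiness = statistics.stdev(lengths) / statistics.mean(lengths)

    # Type-token ratio: share of distinct words among all words.
    words = re.findall(r"[a-z']+", text.lower())
    type_token_ratio = len(set(words)) / len(words)

    # Illustrative thresholds only -- NOT validated cut-offs.
    flagged = burstiness < 0.35 or type_token_ratio < 0.45
    return {
        "burstiness": round(burstiness, 3),
        "type_token_ratio": round(type_token_ratio, 3),
        "verdict": "flag for human review" if flagged else "no signal",
    }


if __name__ == "__main__":
    sample = (
        "The study examined outcomes in three cohorts. "
        "The study measured outcomes at six months. "
        "The study reported outcomes for all groups. "
        "The study compared outcomes across sites. "
        "The study confirmed outcomes were stable."
    )
    print(burstiness_screen(sample))
```

Running this on a paragraph of near-identical sentences flags it, while varied human prose usually passes; either outcome is only a prompt for expert review, not evidence of misconduct.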

What about AI policies? Aren’t Publishers Already Regulating This?

Not consistently. Current policies across major publishers are a patchwork.
While the Committee on Publication Ethics (COPE) and the International Committee of Medical Journal Editors (ICMJE) have declared that AI tools cannot be listed as authors or assume responsibility for generated content, guidelines on permitted use remain vague.

This fragmentation leads to confusion, especially since some journals allow AI in text editing and peer review, while others prohibit it entirely. No wonder 72% of researchers say they're either unaware of institutional AI policies or report that none exist.

Check out Enago's Responsible AI Movement page to stay updated on publisher policies.

Should AI be Banned from Research Writing Altogether?

No, but its use must be visible, verifiable, and responsible. Banning AI outright would only push its use into the shadows. Instead, journals should educate authors through:

  • Training sessions on ethical AI use and fact-checking.
  • Examples of acceptable versus unethical AI assistance.
  • Practical AI-use disclosure templates to guide transparency.

AI can augment research efficiency, but without oversight it can erode trust faster than any other technological shift in academic history.

What’s at Stake for Research Credibility?

The issue is not the technology itself but how humans use, or misuse, it. Publishers now report sharp increases in AI-assisted submissions, while 68% of educators rely on AI detection tools to monitor academic honesty. If scholarly communication doesn't adopt unified standards soon, we risk a future where originality and authorship lose all meaning.

The path forward lies in collective action:

  • Shared taxonomies for AI disclosure requirements.
  • Interoperable detection and verification systems.
  • Ethical frameworks uniting journals, funders, and institutions.

The decisions made today will determine whether AI becomes a tool that strengthens research or a technology that undermines it.