AI may be weakening the quality of published research

Wednesday, 28 May, 2025

Artificial intelligence could be weakening the scientific rigour of new research, according to a study led by the University of Surrey and published in PLOS Biology. The research team has now called for a range of measures to reduce the flood of “low-quality” and “science fiction” papers, including stronger peer review processes and the use of statistical reviewers for complex datasets.

The researchers reviewed papers proposing an association between a predictor and a health condition using an American government dataset called the National Health and Nutrition Examination Survey (NHANES) — a large, publicly available dataset used by researchers around the world to study links between health conditions, lifestyle and clinical outcomes. The team found that between 2014 and 2021, just four NHANES association-based studies were published each year — but this rose to 33 studies in 2022, 82 in 2023 and 190 in 2024.
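
Part of what makes such studies quick to produce is how readily NHANES data can be pulled into standard analysis tools: the CDC distributes each survey cycle as SAS transport (.XPT) files that packages such as pandas read directly. The sketch below is illustrative only and is not taken from the study; the file URL and variable names are examples and should be checked against the current NHANES file listings before use.

```python
# Minimal sketch (not from the study): loading one NHANES cycle into pandas.
# The URL follows the CDC's published layout for the 2017-2018 demographics
# file; treat it as an example and verify it against the NHANES website.
import pandas as pd

DEMO_URL = "https://wwwn.cdc.gov/Nchs/Nhanes/2017-2018/DEMO_J.XPT"

# read_sas handles SAS transport (.XPT) files, including directly from a URL
demo = pd.read_sas(DEMO_URL, format="xport")

print(demo.shape)              # (participants, variables)
print(list(demo.columns[:5]))  # coded variable names, e.g. SEQN (participant ID)
```

With a full survey cycle loaded in a few lines, testing a candidate predictor against an outcome takes little further effort, which is the workflow the Surrey team saw proliferating.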

Many of the post-2021 papers were found to have used a superficial and oversimplified approach to analysis — often focusing on single variables while ignoring more realistic, multi-factor explanations of the links between health conditions and potential causes. Furthermore, some papers cherry-picked narrow data subsets without justification, raising concerns about poor research practice, including data dredging or changing research questions after seeing the results.
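
The statistical problem the authors describe can be illustrated with a short simulation: when many candidate predictors are screened one at a time against an outcome, and arbitrary subgroups are tried after seeing the data, apparently "significant" associations emerge even from pure noise. The sketch below uses synthetic data only, not NHANES, and is an illustration of the general pitfall rather than a reproduction of the study's analysis.

```python
# Illustrative sketch with synthetic data: why one-variable-at-a-time screening
# plus post hoc subgroup selection produces spurious "associations".
# Every variable here is random noise, so any significant result is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants = 5000   # roughly the scale of one NHANES cycle
n_predictors = 200      # many candidate exposures screened one at a time

outcome = rng.normal(size=n_participants)
predictors = rng.normal(size=(n_predictors, n_participants))

# Naive single-variable screening: correlate each predictor with the outcome.
p_values = np.array([stats.pearsonr(x, outcome)[1] for x in predictors])
print(f"'Significant' at p < 0.05: {(p_values < 0.05).sum()} of {n_predictors}")
# Expect around 10 false positives (5% of 200) despite zero real associations.

# Cherry-picking subgroups: split on an arbitrary demographic variable and
# keep whichever subgroup gives the smallest p-value for each predictor.
group = rng.integers(0, 4, size=n_participants)
best_p = np.array([
    min(stats.pearsonr(x[group == g], outcome[group == g])[1] for g in range(4))
    for x in predictors
])
print(f"'Significant' with the best subgroup: {(best_p < 0.05).sum()} of {n_predictors}")
# Four tries per predictor raises the chance of a false positive from 5%
# to about 19% (1 - 0.95**4), with no change in the underlying data.
```

Pre-specified analysis plans and corrections for multiple comparisons are the standard remedies for this pattern, which is one reason the authors call for statistical reviewers on papers that use complex datasets.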

“While AI has the clear potential to help the scientific community make breakthroughs that benefit society, our study has found that it is also part of a perfect storm that could be damaging the foundations of scientific rigour,” said study co-author Dr Matt Spick, from the University of Surrey.

“We’ve seen a surge in papers that look scientific but don’t hold up under scrutiny — this is ‘science fiction’ using national health datasets to masquerade as science fact. The use of these easily accessible datasets via APIs, combined with large language models, is overwhelming some journals and peer reviewers, reducing their ability to assess more meaningful research — and ultimately weakening the quality of science overall.”

To help tackle this issue, the team has laid out a number of practical steps for journals, researchers and data providers. They recommend that researchers use the full datasets available to them unless there’s a clear and well-explained reason to do otherwise, and that they are transparent about which parts of the data were used, over what time periods, and for which groups.

For journals, the authors suggest strengthening peer review by involving reviewers with statistical expertise and making greater use of early desk rejection to reduce the number of formulaic or low-value papers entering the system. Finally, they propose that data providers assign unique application numbers or IDs to track how open datasets are used — a system already in place for some UK health data platforms.

“We’re not trying to block access to data or stop people using AI in their research — we’re asking for some common sense checks,” said lead author Tulsi Suchak, a postgraduate researcher at the University of Surrey. “This includes things like being open about how data is used, making sure reviewers with the right expertise are involved, and flagging when a study only looks at one piece of the puzzle. These changes don’t need to be complex, but they could help journals spot low-quality work earlier and protect the integrity of scientific publishing.”

Co-author Anietie E Aliu, a postgraduate student at the University of Surrey, concluded, “We believe that in the AI era, scientific publishing needs better guardrails. Our suggestions are simple things that could help stop weak or misleading studies from slipping through, without blocking the benefits of AI and open data. These tools are here to stay, so we need to act now to protect trust in research.”
