Hello fellow journalologists,
The hottest topic of the moment is publishing integrity in a world being changed (for good and bad) by AI. This email follows a different format from usual. I’ve pulled together the key news stories and announcements published over the past month on this theme. I’ve excluded opinion pieces; otherwise this email would be much, much longer.
The titles and text that follow are extracts from the sources. None of the text is my own; my role this week is curator, not analyst. I haven’t included source or author names, to make the entries easier for you to parse.
If you enjoy the insight I normally provide, don’t panic. It will return in future emails. I wanted to experiment with a different format in today’s newsletter. Let me know what you think.
The sheer volume of articles listed here tells its own story about the challenges we face as a community. Thankfully, there are some good news stories included too.
Take a deep breath. There’s a lot to get through.
Peer review manipulation and paper mills
Europe’s largest paper mill? 1,500 research articles linked to Ukrainian network. An investigation has identified more than 1,500 research articles produced by a network of Ukrainian companies that could be one of Europe’s largest paper mills — businesses that produce fake or low-quality research papers and sell authorships.
Frontiers’ Research Integrity team uncovers peer review manipulation network. Frontiers’ Research Integrity Auditing team has uncovered a network of authors and editors who conducted peer review with undisclosed conflicts of interest and engaged in citation manipulation. The unethical actions of this network have been confirmed in 122 articles published in Frontiers, across five journals, and have led to their retraction.
Sage journal retracts nearly 50 papers for signs of paper mill activity. Sage has retracted four dozen papers from one of its journals for suspected paper mill activity. The publisher started an investigation into the European Journal of Inflammation “after we noticed signs of papermill activity in one of the articles,” Laura West, a corporate communications and public affairs manager at Sage, told Retraction Watch.
Digital Science investigation shows millions in taxpayers’ money have been awarded to researchers associated with a fictitious network. Researchers associated with a fictitious research network and funding source have collectively netted millions of dollars of taxpayers’ money for current studies from the United States, Japan, Ireland, and other nations.
Embattled journal Cureus halts peer reviewer suggestions. The mega-journal Cureus is eliminating author suggestions for peer reviewers, a prompt that is standard practice at some journals when authors submit a manuscript. According to an email sent August 25 to current and past peer reviewers, the move is “due to the potential conflict of interest” that comes from authors suggesting reviewers who may be mentors and colleagues.
Publishers urged to act on bad faith peer reviewers. A science integrity expert who has identified a growing phenomenon of fraudulent peer reviews has urged publishers to take more action to tackle the problem. Maria Ángeles Oviedo-García, professor of marketing at the University of Seville in Spain, has conducted extensive research into an emerging trend of ‘review mills’, where reviewers submit generic review reports for journal papers.
Retraction-prone editors identified at megajournal PLoS ONE. Nearly one-third of all retracted papers at PLoS ONE can be traced back to just 45 researchers who served as editors at the journal, an analysis of its publication records has found.
AI content is tainting preprints: how moderators are fighting back. PsyArXiv is just one of the many preprint servers — and journals — that are grappling with suspicious submissions. Some papers bear the fingerprints of paper mills, which are services that produce scientific papers on demand. Others show evidence of content written by AI systems, such as fake references, which can be a sign of an AI ‘hallucination’.
India to penalize universities with too many retractions. India’s national university ranking will start penalizing institutions if a sizeable number of papers published by their researchers are retracted — a first for an institutional ranking system. The move is an attempt by the government to address the country’s growing number of retractions due to misconduct.
Artificial intelligence
GAIDeT (Generative AI Delegation Taxonomy): A taxonomy for humans to delegate tasks to generative artificial intelligence in scientific research and publishing. This study introduces the Generative AI Delegation Taxonomy (GAIDeT), informed by existing contributor role taxonomies, peer-reviewed literature, and an iterative consensus-building approach. It categorizes GAI’s contributions at macro and corresponding micro levels, specifying the degree of human oversight required.
One-fifth of computer science papers may include AI content. A massive, cross-disciplinary look at how often scientists turn to artificial intelligence (AI) to write their manuscripts has found steady increases since 2022, when OpenAI’s text-generating chatbot ChatGPT burst onto the scene. In some fields, the use of such generative AI has become almost routine, with up to 22% of computer science papers showing signs of input from the large language models (LLMs) that underlie these programs.
ChatGPT tends to ignore retractions on scientific papers. The large language model–based chatbot ChatGPT fails to highlight the validity concerns with scientific papers that have been retracted or have been the subject of other editorial notices, according to a new study. The analysis, published by Learned Publishing on Aug. 4, examines whether GPT-4o mini recognizes the problems with 217 scholarly studies that have been either retracted or highlighted for validity concerns by the Retraction Watch Database.
What counts as plagiarism? AI-generated papers pose new risks. The AI Scientist is an example of fully automated research in computer science. The tool uses a large language model (LLM) to generate ideas, writes and runs the code by itself, and then writes up the results as a research paper — clearly marked as AI-generated. It’s the start of an effort to have AI systems make their own research discoveries, says the team behind it.
AI tool flags more than 1,000 journals for ‘questionable,’ possibly shady practices. A study of 15,000 open-access journals has used artificial intelligence (AI) to spot telltale signs of “questionable” journals, a genre researchers fear is corrupting the scientific record by prioritizing profits over scientific integrity. The analysis, the most comprehensive use so far of AI to identify potentially problematic journals, flagged more than 1,000 titles, about 7% of the sample.
Springer Nature retracts book with fake citations. Springer Nature has officially retracted a book on machine learning following coverage by Retraction Watch. A reader sent us a tip about this book; we’d love your help identifying more.
Policies and guidelines
New COPE retraction guidelines address paper mills, third parties, and more. The organization has also released new, separate guidance for expressions of concern. Both documents reiterate the as-soon-as-possible timeframe for notices and give more specific details on what information should be included in each type of notice.
Updates to PLOS retrospective health database editorial policy. We recently updated the standards against which we evaluate research using publicly available health and social science databases. When conducted rigorously, these studies are important for understanding prevalence and generating hypotheses for future projects. However, the datasets can also be misused for research that lacks a legitimate research question and does not make a contribution to the literature.
Strengthening publishing integrity: industry collaboration and new guidelines for guest-edited collections. At Frontiers, Research Topics are our model for these article collections. Defined and led by expert researchers, they unite global communities around a common theme and ensure the outcomes are openly available. This approach not only supports scientific progress but also ensures that new knowledge can be rapidly translated into real-world change.
US funders
RFK Jr demanded a vaccine study be retracted — the journal said no. Annals of Internal Medicine says it stands by the study and has no plans to retract it. Christine Laine, editor in chief for the journal, wrote in a comment on the study’s web page on 11 August that “retraction is warranted only when serious errors invalidate findings or there is documented scientific misconduct, neither of which occurred here”.
NIH Publishes Plan to Drive Gold Standard Science. Gold standard science isn’t just what we strive for, it is embedded in everything we do, from the research we support to the policies and programs we create. By ensuring our scientific findings are objective, credible, and accessible to the public, NIH is well positioned to continue to lead the U.S. in transforming discovery into improved health.
Top medical journal editors defend their standards, independence. The editors-in-chief of two of the world’s highest-impact general medical journals are weighing in to defend the time-tested, independent process that they and their colleagues use to “vet, challenge, and advance science.” Kirsten Bibbins-Domingo, PhD, MD, MAS, editor in chief of JAMA® and the JAMA Network, joined with her counterpart at The New England Journal of Medicine, Eric J. Rubin, MD, PhD, to explain how the longstanding system of independent editorial review helps ensure scientific rigor and enables evidence-based advancements in health care.
Tools and services
New signal detects tortured phrases in manuscript submissions. Signals now scans journal submissions for tortured phrases, helping editors and research integrity teams quickly spot these issues early in the publishing process. These unusual and often nonsensical phrases are used in place of standard academic terms — a couple of examples include “man-made consciousness” instead of “artificial intelligence”, and “bosom peril” instead of “breast cancer.” Tortured phrases can be a clear indicator of publishing misconduct, often used to evade plagiarism detectors.
Emerald Publishing to safeguard research integrity with Dimensions Author Check. Digital Science is pleased to announce that Emerald Publishing has adopted Dimensions Author Check as part of Emerald’s ongoing commitment to research integrity. Dimensions Author Check offers publishers a fast and reliable way to incorporate research integrity checks into their work, helping to support responsible and ethical publishing.
Wiley Achieves Milestone with 1,000 Scholarly Journals Now Operating on Research Exchange Platform. Automated screening tools are fully integrated into the publishing workflow, helping maintain research integrity standards while reducing manual review time. The platform conducts 25 comprehensive checks at the initial screening, completing the process in under 10 minutes. Any potential concerns are automatically flagged for further review. This screening stage helps editors by filtering out papers with major scope or integrity issues.
KGL Wins ISMTE 2025 People’s Choice Award for Most Innovative Idea with Smart Review™ Editorial Integrity Platform. The recognition celebrates Smart Review’s transformative impact on peer review workflows by automating critical checks, integrating cutting-edge research integrity tools, and streamlining editorial processes to ensure accuracy, quality, and efficiency at scale.
GenAI detection that actually works. Bottom line: if someone wants to do a survey of genAI usage in the published literature, I think there’s a great opportunity to do that with Pangram’s tools. As a method to detect misconduct, it’s unlikely that genAI detection will be a smoking gun. But I think it will be very useful.
Signals Announces Innovative Research Integrity Badge and Key Partnerships to Empower Researchers. In a recent ORCID Research Integrity poll, 67% of researchers admitted to skipping an article due to trust concerns. This is why we are thrilled to announce the launch of the Signals Badge. Researchers can now prioritize high-quality literature to use in their own work and avoid problematic articles with research integrity issues through this useful tool. The Badge is designed to bring transparency to the evaluation of research integrity in scholarly publishing and serves as a clear visual indicator of Signals’ comprehensive assessment of a publication.
Global Campus and Signals Partner to Enhance Research Integrity in Peer Review. Global Campus and Signals are pleased to announce a new partnership to integrate the Signals Badge into Global Campus. This integration allows users to quickly evaluate the credibility of researchers’ prior work, helping publishers select high-quality, trustworthy reviewers.
Imagetwin and Clear Skies Announce Partnership to Strengthen Research Integrity. Imagetwin and Clear Skies are proud to announce a new partnership that brings Imagetwin’s advanced figure analysis technology into Oversight, Clear Skies’ award-winning research integrity platform. With this integration, users of Oversight will be able to access Imagetwin’s image analysis directly within their workflow. This marks a decisive step forward in ensuring research standards and providing institutions, publishers, and integrity officers with the tools they need to detect and prevent misconduct.
Imagetwin Powers Image Integrity Checks in Integra’s EditorialPilot. We’re excited to announce that Imagetwin has been integrated into Integra’s EditorialPilot, an all-in-one AI-powered manuscript screening platform. This partnership makes it easier than ever for publishers to check image integrity at scale, alongside the 40+ other automated checks EditorialPilot already offers.
Enago partners with the Royal Society of Chemistry to provide bespoke AI technology for manuscript checking. Enago has entered an agreement with the Royal Society of Chemistry (RSC) to provide bespoke manuscript screening technology for incoming journal submissions. This is a significant step from the RSC to support authors with checking journal-specific requirements and preparing their manuscript, with the aim of making the submission process quicker and improving author experience.
Q2 2025 Citing Retracted Research: Dissemination of retracted and fake science continues at a good pace. A fresh look at articles published in the second quarter of 2025 continues to shed light on a critical challenge in academic publishing: the persistent citation of retracted research. Even when retractions are known and documented prior to publication, retracted articles still find their way into new scholarly work.
Silverchair Announces New Research Integrity Integrations via ScholarOne Relay. Silverchair has announced new research integrity integrations via partnerships with Cactus Communications, Clear Skies, Signals, and STM Integrity Hub, following the August 26th release of the ScholarOne Relay API. These collaborations leverage the Relay API to embed advanced research integrity checks seamlessly into editorial workflows, empowering publishers to efficiently identify and investigate potential research integrity issues.
Aries Systems and Signals Partner to Strengthen the Efficiency and Accuracy of Research Integrity Checks. To streamline this process and safeguard against these threats, Aries Systems and Research Signals Limited have partnered to integrate Signals Manuscript Checks, a research integrity evaluation tool, with Editorial Manager® (EM), the leading manuscript submission and peer review tracking system.
CACTUS partners with CSIRO Publishing to offer editorial support to authors. Cactus Communications (CACTUS), a leading technology company specializing in AI and expert solutions for the scholarly publishing ecosystem, and CSIRO Publishing, Australia’s leading science publisher of peer-reviewed journals, books and news, today announced a new partnership to support CSIRO Publishing authors with access to expert language and writing support services.
And finally...
I promised no opinion pieces, but since it’s Peer Review Week, it seems churlish not to include an extract from a recent JAMA editorial: Artificial Intelligence in Peer Review. Here’s the bottom line:
Our hope is that automating some aspects of peer review, at first, will help to relieve the need to complete rote tasks, allowing scarcer human expertise to focus on aspects such as impact and significance, novelty, and clinical relevance. We endeavor to improve both the quality and efficiency of the peer review process, all while keeping our hands on the wheel and our eyes on the road. We believe it will be critical to maintain a human in the loop even as we seek to incorporate the strengths of AI-based review in our editorial process. In so doing, human editors will maintain full oversight, accountability, and responsibility for scientific rigor, standards, and editorial decisions.
Until next time,
James
P.S. If I missed your story or announcement, I apologise. It was hard to keep up! Please let me know about any omissions so that I can add your website to the ones I monitor and do a better job in the future.