The Jist: 2 April to 14 April
Get the gist of recent news stories, written by journalists, that cover topics related to scholarly communication.
Hello fellow journalologists,
I’ve been playing catch-up after taking some time off over Easter; this email provides a summary of how news outlets have been covering scholarly communication over the past few weeks.
I’m in the process of writing the next annual report summary, which will be sent very soon (in the meantime, you can read the Elsevier analysis if you missed it).
Massive budget cuts for US science proposed again by Trump administration
The proposal would also prohibit the spending of “Federal funds for expensive subscriptions to academic journals and prohibitively high publishing costs unless required by Federal statute or approved in advance by a Federal agency”. The proposal does not define ‘expensive’ or ‘prohibitively high’ or specify which journals would be affected. Many journals “charge the Government to both publish and to access the same research study”, the proposal says, adding that there are many “low-cost outlets” for publishing federally funded research.
JB: This is just a budget proposal, but it could have significant ramifications for publishers and author choice. We’re still waiting to hear about the NIH’s new public access policy, which was expected to have been published by now.
‘Fight AI-driven resmearch with points for peer review’
David Comerford, professor of economics at the University of Stirling, told Research Professional News that incentivising rigorous peer review would be a first line of defence against the rising tide of “resmearch”, where special interest groups or firms use AI to trawl large, public datasets and produce findings that seem to support their product or stance, even if there are other findings that contradict them.
JB: “Resmearch” is a neologism that was new to me.
Scientists invented a fake disease. AI told people it was real
The condition doesn’t appear in the standard medical literature — because it doesn’t exist. It’s the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024… The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.
JB: Is this a neomorbus?
A journal named a sleuth in a correction. The sleuth says that was ‘ethical editorial malpractice’
As the publishing community debates the merits of naming sleuths in retraction or correction notices, one journal did so without the sleuth’s permission, publishing as its correction notice an email from the authors that named her. The sleuth calls it “ethical editorial malpractice.” The publisher says it was an “administrative error.” After Retraction Watch reached out for comment, the journal removed the text of the email from the correction notice.
Dutch universities set out vision for ‘new publication culture’
Academia needs a “new publication culture” to protect research integrity, the association of Dutch universities (UNL) has concluded, outlining its vision of the right way forward. In a position paper published this month, UNL says research integrity is “under increasing pressure from metric-driven ‘publish or perish’ incentives, predatory journals, paper mills, artificial intelligence-generated or fabricated content and other questionable research practices”.
Canadian panel seeks to add more teeth to research oversight
A Canadian panel is proposing several changes to its guidelines for responsible conduct of research, including a provision that effectively removes any statute of limitations on investigations into potential misconduct. The proposed revisions, from the Canadian Panel on Responsible Conduct of Research (PRCR), are up for public comment until April 17 and have not been made official.
Should academic misconduct be catalogued? Proposed US database sparks debate
For decades, academic institutions have struggled with how to prevent researchers who have committed misconduct from securing jobs at new universities while hiding the bad behaviour. A proposal published today in the journal Science offers a solution, at least in the United States: creating a national database of people found guilty of data fabrication, workplace harassment and more, that would be accessed by research institutions before making new hires. But scientists who spoke to Nature are divided over whether this centralized, confidential list would solve the problem or generate new ones.
JB: You can read the proposal in Science here: More transparency needed on misconduct.
Hallucinated citations are polluting the scientific literature. What can be done?
As a rough estimate, if the rate of 65 publications with at least one invalid reference out of some 4,000 publications analysed holds across the academic literature, it would suggest that more than 110,000 of the 7 million or so scholarly publications from 2025 contain invalid references. Nick Morley, Grounded AI’s co-founder and chief product officer, says that the types of citation problem seen in 2025 are different from those found by his team before the proliferation of LLMs. This fact, he says, points to the use of AI as a leading culprit.
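The extrapolation above is simple proportional scaling; a minimal sketch of the arithmetic, using the article’s own figures (65 flagged publications in a sample of roughly 4,000, scaled to the article’s approximate 7 million publications for 2025):

```python
# Back-of-envelope extrapolation of invalid-reference prevalence,
# using the figures quoted in the article.
flagged = 65             # sampled publications with at least one invalid reference
sample = 4_000           # publications analysed in the sample
total_2025 = 7_000_000   # approximate scholarly publications from 2025

# Integer arithmetic keeps the estimate exact for these round numbers.
estimate = flagged * total_2025 // sample
rate_pct = 100 * flagged / sample

print(f"sample rate ~{rate_pct:.2f}%, projected ~{estimate:,} publications")
# projected estimate is 113,750, consistent with "more than 110,000"
```

This assumes the sample rate holds across the whole literature, a caveat the article itself flags with “as a rough estimate”.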
Judge tosses lawsuit over controversial Paxil ‘Study 329’
In a March 24 decision, Judge Robert Okun granted Elsevier’s motion to dismiss for lack of standing. Murgatroyd can’t move forward with the suit because he failed to establish “or even plausibly” allege the journal article is a consumer good or service under the CPPA, according to Okun’s ruling. The CPPA defines a consumer good or service as anything someone would purchase or receive and normally use for personal, household or family purposes.
AIs can ‘memorize’ data they shouldn’t. Can they be forced to forget?
Sometimes, LLMs spit out word-for-word copies of what they ingested, potentially violating copyright or exposing sensitive information such as credit card numbers and addresses… Later this month, however, researchers will present a potentially valuable new tool for studying memorization at a major AI conference in Brazil. Named Hubble—because its creators hope that, like the Hubble Space Telescope, it can help clarify the unknown—it is the first open-source tool designed specifically to study the problem.
Internet Archive ‘collateral damage’ in AI news battle
To prevent what many publishers see as theft of their copyrighted material by potential competitors, some of them have blocked AI developers from crawling their websites and copying stories from them. But many such publishers have grown concerned in recent months that AI developers are using the [Internet] archive as a kind of back door to the publishers’ content, and so they’ve also started to block its access to their material or curtailed its ability to distribute that material.
JB: You may also be interested in: 100+ journalists applaud the Internet Archive’s role in preserving the public record.
Human scientists trounce the best AI agents on complex tasks
The proportion of publications in any given natural-sciences field that mention AI ranges from 6% to 9%, according to the Artificial Intelligence Index Report 2026, released today by the Stanford Institute for Human-Centered AI at Stanford University in California. “Scientists have really embraced this AI era,” says computer scientist Yolanda Gil at the University of Southern California in Los Angeles, who led this year’s index report.
JB: You can read the report here: The 2026 AI Index Report.
Have you published a disruptive paper? New machine-learning tool helps you check
Scientists in the US have unveiled a new machine-learning tool that, they claim, can identify disruptive scientific breakthroughs. They say their method, which assesses how much a paper reshapes its field, is better than other techniques at spotting such disruptions even if they are simultaneously discovered by independent research groups.
Can journals that pay peer reviewers succeed?
The model is straightforward – the journal charges authors an Article Processing Charge (APC) of £1,950, paid after acceptance, typically covered by funders, institutions or grant money. That revenue funds compensation for both editors and peer reviewers, who receive $100 (£75) per review. The journal has a rejection rate of roughly 60 per cent. Since its launch, the platform has published 50 papers and attracted more than 500 registered reviewers, without paid advertising. Kunst says that although the reaction in the academic community has mostly been positive, establishing the journal has been an “uphill battle”.
JB: I covered the launch of Advances.in back in issue 14 of this newsletter. Their tagline is “reinventing academic publishing”. 50 papers published over a 4-year period suggests that the process is going rather slowly.
We shouldn’t assume that the journal’s low output is due to the paid peer review model. Correlation does not equal causation. Launching a new journal independently was always going to be difficult, regardless of whether peer reviewers were paid or not.
I’m looking forward to reading the update on the Fast & Fair paid peer review pilot at Biology Open (published by the Company of Biologists). The graphs on this page suggest that paying a select group of peer reviewers can help to publish papers quickly.
Offering scientists cash to spot errors in published papers doesn’t work
A project that offers researchers a cash bounty for finding mistakes in published scientific papers has run into trouble: It can’t find enough reviewers to do the work. Now, organizers of the Estimating the Reliability and Robustness of Research (ERROR) project are planning to throw in an additional incentive, by publishing the reviews in a new peer-reviewed journal… The project planned to carry out 100 in-depth critiques in 4 years, but only nine have been completed so far, with eight more in the works. Candidate reviewers identified by ERROR often decline requests immediately, agree to do the work but don’t follow up, or ghost the project organizers after a few emails, Elson says.
China dominates the discovery of new chemicals and reactions
China now discovers more than 40% of new chemicals and reactions reported in scientific literature, with the country’s contributions growing exponentially in recent decades, according to a new report. The researchers behind the work attribute this to China’s investment in its chemical sector, which has enabled the country to overtake the US as the dominant leader in chemical discovery. The report also challenges the idea that China’s progress in the chemical sciences is due to its collaborations with US scientists.
BMJ retracts most of a special issue for ‘compromised’ peer review and ‘improbable device use’
BMJ’s Journal of Medical Genetics has retracted the bulk of a seven-year-old special issue for an “irreparably compromised” review process and “improbable device use.” Of the eight papers in the 2019 special issue, seven were retracted, including an editorial that “almost exclusively” referred to the other now-retracted papers, according to a statement from the journal. According to the retraction notice published today, the journal’s investigation found the guest editor for the issue selected the peer reviewers, the majority of whom were affiliated with Nanjing University in China. The guest editor is not named in the issue. The publisher’s investigation also found evidence of compromised peer review in almost all articles, the notice states.
JB: There are two lessons here. First, journalists often confuse the name of the flagship journal (The BMJ) with the publisher (BMJ Group). The BMJ did not retract these papers; another journal in the BMJ Group did. Second, publishers should make clear who guest edited a special issue. The guest editors should take public responsibility for what they publish. As I’ve said before, I don’t have a problem with special issues per se. I do have concerns about the guest editor model.
What Julia Angwin’s Case Reveals About AI, Reputation, and the Right of Publicity
In the complaint, Angwin takes aim at “Grammarly’s misappropriation of the names and identities of hundreds of journalists, authors, writers, and editors.” In a nutshell, Grammarly offered a service that provided writing advice, identifying and associating that advice with the names of high-profile writers, including Angwin, Stephen King, Neil deGrasse Tyson, Casey Newton, and many others, despite no relationship between these writers and Grammarly. Most, if not all, of the writers were initially completely unaware of the existence of the feature.
JB: This is fascinating. Imagine an AI tool that could edit in the style of Annette Flanagin or write like a Nobel Prize winner of your choice.
And finally…
I’ve been asking myself whether editorial and publishing leaders will follow Mark Zuckerberg’s lead:
If you are one of Meta’s almost 79,000 employees and cannot get hold of the boss, do not worry. The owner of Facebook and Instagram is reportedly working on an AI version of Mark Zuckerberg who can answer all your queries. The AI clone of Zuckerberg, Meta’s founder and chief executive, is being trained on his mannerisms and tone as well as his public statements and thoughts on company strategy. The rationale behind the project, according to the Financial Times, is that employees could feel more connected to one of the most powerful people in Silicon Valley.
The possibilities are endless. For example, authors could ask a virtual Editor-in-Chief why their paper was rejected. This is a solution that scales and negates the need for a time-consuming appeals process. What could go wrong? AI is all about improving efficiency, right?
Until next time,
James
P.S. Please hit the share button if you think your colleagues would enjoy reading The Jist.