The Jist: 17 March to 1 April
Get the gist of recent news stories, written by journalists, that cover topics related to scholarly communication.
Hello fellow journalologists,
Earlier this week I wrote an essay that dissected the RELX (Elsevier) annual report. Such documents are somewhat dull affairs, but they contain nuggets of information that can’t be found elsewhere.
The essay was the first in a series of posts analysing the largest publishers’ performance in 2025. The goal is to help scholarly communication professionals and researchers better understand the academic publishing landscape.
If you haven’t seen that essay yet, you can read it here:
Next up is Springer Nature, which released its annual report a few days ago. I haven’t finished writing that analysis yet, so I thought I would send you the latest instalment of The Jist before the Easter break instead. It should help you to see how news outlets have covered scholarly communication over the past few weeks.
News headlines
Across the social sciences, half of research doesn’t replicate
A sweeping project involving hundreds of researchers in several dozen countries showed that across the social sciences, the findings of roughly half of all papers cannot be replicated independently, and there’s no reliable way to tell in advance which ones will falter. Called Systematizing Confidence in Open Research and Evidence (SCORE), the effort investigated more than 100 papers published in dozens of leading journals in business, economics, education, political science, psychology, and sociology. The replication success rate—49% for the 164 papers evaluated, reported today in Nature—is consistent with findings from previous studies in individual fields such as psychology, suggesting the problem is pervasive in the social sciences.
JB: You can read the accompanying News & Views article here, a Comment with one of the lead authors here, an Editorial here, and the three research papers here, here and here. The N&V article concludes:
Journal editors, presidents of professional associations, department chairs and other social-science leaders should read these results as showing that every social science faces replication problems. Those who study scientific paradigm shifts might warn that the gatekeepers of the current system often have the most to lose from changing it. But I am more optimistic. Tighter standards are not merely restrictions; they also create fresh opportunities for innovation, including big-team science, ‘adversarial’ collaborations and healthier norms of independent verification. If social-science institutions help to improve standards for empirical evidence, they can ensure that scholars’ professional incentives align with their scientific values.
In the Comment article Brian Nosek says:
A core lesson from SCORE is that there is no single measure of trustworthiness, and there never will be. ‘Published or not’, for example, is a crude way to assess the quality of science. We take peer review as sacrosanct when everyone knows it’s not. Peer review is highly tentative, occurs at a single point in time, is ad hoc, permanent — and, in most cases, opaque.
Amen to that.
Chinese institutions account for over half of research paper retractions
According to the study, there were 29,867 Chinese affiliations listed on these retractions – more than 91% of which don’t list international collaborators. Researchers in China produced 16.5% of all research output during that time period [1997 to 2026], the study found, despite the country’s institutions being listed on more than 52% of retracted papers in the sample. Following China, institutions based in India, the US and Saudi Arabia feature on 7.25%, 5.72% and 2.83% of retractions, respectively.
Researchers from China dominate IOPP outstanding reviewer awards
This year’s recipients were selected from about 35,000 reviewers who submitted peer-review reports to IOP Publishing journals in 2025. Journal editors evaluated nominees based on the volume, timeliness and quality of their reviews. A total of 1,621 individuals have been honoured with a 2025 award. China accounts for 30% of awardees, followed by 16% from the US and just over 6% from India. Some 10% of this year’s award winners are also based in lower middle-income countries or territories.
Over 90 Percent of Scientists Admit to Questionable Research Behaviors
Recently, Entradas and her colleagues surveyed more than 1,500 researchers in Portuguese universities to gauge their perception and participation in such dubious practices. The findings, published in PLoS One, revealed that 91 percent of the researchers have participated in at least one practice that lies in the grey zone of scientific integrity, indicating that widespread QRPs [questionable research practices] may pose a threat to ethical research.
Women Face Longer Peer Review Delays
A recent large-scale study published in PLOS Biology confirms gender bias in academia by showing that scientific papers led by women as first or corresponding authors experience longer delays in the peer-review process than those led by their male colleagues. Researchers in the Biology Department at the University of Nevada, Reno, analyzed more than 36 million academic articles in over 36,000 biomedical and life science journals. Using articles indexed in PubMed, the analysis found that average review time is between 7.4% and 14.6% longer for papers authored by women than for those authored by men. This gender gap is widespread and affects most disciplines, regardless of female representation in each field.
Hallucinated citations are polluting the scientific literature. What can be done?
As a rough estimate, if the rate of 65 publications with at least one invalid reference out of some 4,000 publications analysed holds across the academic literature, it would suggest that more than 110,000 of the 7 million or so scholarly publications from 2025 contain invalid references… The true number of hallucinated references is almost certainly higher, says Weber-Boer, because the analysis focused on big publishers, which have more resources for checking citations systematically than do smaller publishers.
Karger publishing house lays off 76 of 114 employees
76 employees of the long-established publishing house Karger are losing their jobs – just three months after its takeover by Oxford University Press. Those affected have criticized the cold manner of the dismissals. The feared mass layoff at the renowned scientific publisher has now been confirmed: 76 employees, two thirds of the workforce, will lose their jobs in Basel. This is evident from S. Karger AG’s letter on the consultation process sent to the Office for Economic Affairs and Labour (AWA), which this editorial team has seen. The terminations were issued at the end of March, with individual notice periods ranging from one to six months.
Commission’s EU-wide open access idea prompts concerns
The European Commission’s announcement that it might move to impose mandatory open access to the results of publicly funded research across the EU has received a mixed reception from research and publishing leaders. Ekaterina Zaharieva, the EU research commissioner, told the European Parliament this month that the Commission was “looking into making publicly funded research open access by default” under the planned European Research Area Act. Expected this year, the act will seek to require EU member states to take steps to improve their research systems.
Cern confirms it will run expanded fee-free publishing platform
In its own announcement, the [European] Commission said ORE will have a budget of €17 million for 2026-31, with the EU providing €10m. Since it launched five years ago, ORE has published more than 1,200 articles. Cern said the platform is “expected to support a growing number of research outputs each year”. Last month, experts told RPN they thought uptake of the increased eligibility will depend on how the newly participating national organisations engage with their communities.
‘Coordination needed’ for innovation in scholarly publishing
Bringing about innovation in scholarly publishing requires coordination between stakeholders, assessment reform and public investment, according to a report from Knowledge Exchange, a group of European research organisations. The report, published on 18 March, considers six such innovations: preregistration of research protocols to ensure robust methodology; publication of successive versions of papers with gradual improvements; publication of preprints; open peer review; post-publication curation to speed up dissemination; and modular publication of not only papers but also components such as methods and data.
Wikipedia bans AI-generated content in its online encyclopedia
Wikipedia has banned the use of artificial intelligence in the generation or rewriting of content for its voluminous online encyclopedia. In a recent policy change, Wikipedia said that the use of large language models (or LLMs) “often violates” its core principles and will not be allowed. The English language version of Wikipedia has more than 7.1m articles. The use of AI has been a contentious issue among Wikipedia’s community of volunteer editors but a vote among the site’s editors supported the ban, according to 404 Media. There are two exceptions to the new ban: AI can still be used for translations, and to make minor copy edits.
Fresh AI data mining plan ‘could hand research to big tech’
Researchers and publishers have expressed scepticism about a proposed exception to UK copyright law that would permit data mining by artificial intelligence developers purely for science and research. Earlier this month, the [UK] government released a report on AI and copyright following a consultation on proposed changes to UK law. Proposals to carve out a broad copyright exception for data mining had sparked a backlash, particularly from the creative sector, and the government has dropped its preferred option of allowing such an exception to go ahead with rights holders able to opt out.
ERC sets out firm line on use of AI in peer review
The European Research Council has published new guidelines on how its expert reviewers of research proposals can use artificial intelligence, setting out stricter rules than some other sectoral organisations. Reviewers must not delegate their evaluation to AI and must respect the confidentiality of the proposal, according to the guidelines published by the ERC Scientific Council on 24 March.
How to build an AI scientist: first peer-reviewed paper spills the secrets
AI Scientist is a collection of ‘agents’ built on top of existing large language models (LLMs), such as GPT-4o or Claude Sonnet 4. It prompts those LLMs to search the literature on a given topic, generate hypotheses and design a set of possible research directions. Next, AI Scientist writes code, executes it and measures its efficiency. Finally, it writes a paper describing the results. The authors of the paper describing the tool also created an ‘automated reviewer’ to evaluate the quality of its output. The results “approach borderline acceptability for machine learning conference workshops”, the authors write.
Major conference catches illicit AI use — and rejects hundreds of papers
A major artificial-intelligence conference has rejected 497 papers — roughly 2% of submissions — whose authors violated AI-use policies in their peer reviews of other articles submitted to the meeting. The International Conference on Machine Learning (ICML), to be held in Seoul in July, has a reciprocal review policy, meaning that, bar certain exceptions, every paper must have an author who reviews other conference papers. Authors whose reviews violated the conference’s large language model (LLM)-use policy had their papers rejected.
The Lancet retracts half-century-old unsigned commentary on talc for undisclosed industry ties
In their reply, the journal editors said publishing unsigned commentaries “used to be standard practice.” A representative from The Lancet told us the journal would only consider publishing an unsigned letter now “in rare circumstances where there are concerns about author safety.” In those circumstances, the editors are still aware of author names and affiliations, the representative said.
And finally…
Well, that was quite a lot, wasn’t it? Time for a nap, perhaps?
His 2025 book, The Brain At Rest, proposes that regular bouts of doing nothing can change your life. Finding time to let your mind wander and take a daily 30-minute nap can make you more creative and efficient, he argues.
Until next time,
James
P.S. The winner of the best April fool’s joke goes to De Grill, the new brand for De Gruyter Brill. The logo was cooked to perfection 🍔!


