Journalology #140: Meatspace workers
Hello fellow journalologists,
I’ve decided to split this newsletter and start sending two emails a week.
The Sunday email will be ‘free to read’ by everyone and will largely focus on news stories written by journalists, which are likely to be of interest to a broad audience.
Paid subscribers will also receive a longer email midweek that contains a deep dive into more specialist publishing stories.
I’m making this change primarily to keep the writing process manageable. Thursdays and Fridays are especially popular days for announcements; I’ve been struggling to get everything together by Sunday without spending a full day writing at the weekend.
I know that some Journalologists enjoy the long read on a Sunday and I’m sorry to make a change to that routine. Hopefully you will understand.
News headlines
China to punish universities that fail to sanction research misconduct
China’s science ministry will crack down on universities that fail to investigate or sanction researchers who are involved in serious research misconduct. The move is part of a renewed push to get academics and their institutions to take scientific integrity more seriously. The nation’s Ministry of Science and Technology (MOST) said in a notification on its website that institutions should focus on investigating papers that are retracted in international science journals as a result of misconduct. The results of those investigations will be publicized to enhance deterrence. Institutions will face serious penalties if they conceal or tolerate wrongdoing by their researchers, the note states, although it does not reveal what those penalties might be.
JB: Ideally, research integrity issues should be stopped at source; punishing misconduct after it has occurred is the second-best option. Unfortunately, many universities focus on reputational damage control rather than putting effort into ensuring that their faculty’s output meets basic quality standards.
How AI slop is causing a crisis in computer science
Computer science is the first field to face the deluge of slop — because research happens in silico and is done by researchers with AI expertise. But with the rise of AI, similar challenges are likely to arise in other fields, including wet-lab-based disciplines, says Lee, “although they may manifest in somewhat different ways”. Speaking to reporters at the launch of Prism last month, Kevin Weil, vice-president of science at OpenAI in San Francisco, California, compared the problem of weeding out low-quality AI-made content with that of spam filtering for e-mails. Working out how to filter out low-quality AI-assisted science is “a real important problem, but we also want to make sure that we are accelerating the top end and helping great scientists do more”, he said.
JB: This article pulls together various stories related to computer science and AI-generated articles that have broken in the past month (most, if not all, of which I’ve covered in this newsletter).
If this is old news to you, perhaps this story from Nature will grab your interest: AI agents are hiring human ‘meatspace workers’ — including some scientists.
The idea is simple, as the website’s homepage reads: “robots need your body”. Human users can create profiles to advertise their skills for tasks that an AI tool can’t accomplish on its own — go to meetings, conduct experiments, or play instruments, for example — along with how much they expect to be paid. People — or ‘meatspace workers’ as the site calls them — can then apply to jobs posted by AI agents or wait to be contacted by one. The website shows that more than 450,000 people have offered their services on the site.
As journal’s retraction count nears 170, it enhances vetting
In response to the findings, ASTM has implemented a series of safeguards, including enhanced vetting and oversight of guest editors, tighter controls for peer reviewers, expanded use of technology-assisted screening tools to detect irregular review patterns and ongoing monitoring of editorial workflows, according to a Jan. 30 statement from the publisher.
JB: This is an update on a story that I covered in issue 138. ASTM = American Society for Testing and Materials, which self-publishes a handful of journals.
The CDC hepatitis B study is unethical and must never be published
The Committee on Publication Ethics (COPE), an international organization that establishes best practices for scholarly journals, has endorsed specific ethical standards for studies that involve vulnerable groups. Among these standards is this statement in the Declaration of Helsinki: “Reports of research not in accordance with this Declaration should not be accepted for publication.” The current controversy about the U.S. Centers for Disease Control and Prevention (CDC) funding for a proposed study of hepatitis B vaccines in Guinea-Bissau must serve as a reminder of this core requirement of publication ethics.
JB: I’m including an extract from this essay to raise awareness of this important issue. On Friday the World Health Organization released this statement:
WHO is aware of the proposed randomized controlled trial (RCT) on the hepatitis B birth dose vaccine in Guinea‑Bissau. Based on questions raised in publicly available information and consultation with relevant experts, WHO has significant concerns regarding the study’s scientific justification, ethical safeguards, and overall alignment with established principles for research involving human participants.
Number of UK universities opting out of Elsevier deal hits nine
While not all universities are signing up to agreements with all five publishers—with some also having opted out of deals in previous years—Elsevier appears to be seeing the greatest number of rejections, with universities criticising pricing and a perceived lack of willingness by the publisher to respond to research community asks on open access. Amid ongoing financial pressures on the UK higher education sector, there were expectations that some universities would not be able to afford to renew their deals.
JB: Last year the UK published 170,000 research articles. These nine universities appear on 15,500 (9%) of those articles (source: Dimensions).
AI help in grant proposals tied to higher funding odds at NIH
Scientists are increasingly turning to artificial-intelligence systems for help drafting the grant proposals that fund their careers, but preliminary data indicate that these tools might be pulling the focus of research towards safe, less-innovative ideas. These data provide evidence that AI-assisted proposals submitted to the US National Institutes of Health (NIH) are consistently less distinct from previous research than ones written without the use of AI — and are also slightly more likely to be funded.
JB: Would the same be true for research articles submitted to journals? Probably.
And finally…
I try to read Richard Horton’s column in The Lancet, the journal he edits, every week. The latest essay is Meditations of Melancholy.
The word that explains our current state of mind is anomie—according to Émile Durkheim, a serious disturbance of society’s collective order, a sudden rearrangement or transformation of the social body, a disruption producing a state of disorganisation that creates disappointment, exasperation, and suffering.
Richard reflects on a lecture he attended last year by Salim Abdool Karim entitled Science Under Threat.
Karim concluded his lecture with a to-do list of actions to defeat the assault on science. Recognise and understand the problem. Defend the role of the scientist in society. Fact-check everything. Rebuild trust. And remember that every act of resistance helps. A political strategist I know once said, “We will not win because we are right, we will win because we are organised.”
The academic publishing industry is failing miserably at “fact-check everything”. We need to be better organised. We need to rebuild trust.
Given the volume of research content that’s being created, we have to leverage technology to help. But that, in itself, creates problems, as Krishna Kumar Venkitachalam explored in his Scholarly Kitchen essay on AI anxiety.
Science operates in a strange space. The system is designed to be ruthless, unemotional, and evidence-based. Yet the individuals running it are warm-blooded, vulnerable, emotional, gut-feeling-based humans. So far, we’ve balanced this such that the system stays rigorous while humans stay involved enough. AI threatens to disrupt this balance by replacing human interactions that happen in the margins. From my perspective, role redundancy is less concerning than what happens when we stop interacting with each other and start interfacing primarily with systems.
Curiosity and inquiry are unique human characteristics. Science is under threat, not just from politicians but also from within. Have we truly recognised and understood the problems we’re facing? When quantity matters more than quality — when output is the primary measure of success, for both researchers and for publishers — we risk losing sight of what really matters: a trusted role for science, and academic research more broadly, in society.
Until next time,
James