Hello fellow journalologists,
Here’s the gist of what’s happened in scholarly publishing in the past week. The full length version of Journalology will return later this month.
AI tool flags more than 1000 journals for ‘questionable,’ possibly shady practices. Out of a sample of 15,191 journals, Acuña’s team estimates that the AI correctly classified 1092 as questionable, but it did the same for 345 journals that weren’t problematic, so-called false positives. It also failed to flag 1782 questionable journals, the false negatives. Additional tuning of the AI increased its sensitivity to questionable titles but also boosted the number of false positives. Acuña says such trade-offs are inevitable, and the results “must be read as preliminary signals” meriting further investigation “rather than final verdicts.”
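A quick back-of-envelope calculation makes that trade-off concrete. The sketch below is mine, not Acuña’s, and assumes the reported counts map onto a standard confusion matrix for the “questionable” class (1092 true positives, 345 false positives, 1782 false negatives):

```python
# Illustrative arithmetic only: assumes the article's counts correspond to a
# standard confusion matrix for the "questionable" class.
true_positives = 1092    # questionable journals correctly flagged
false_positives = 345    # legitimate journals flagged in error
false_negatives = 1782   # questionable journals the tool missed

recall = true_positives / (true_positives + false_negatives)
precision = true_positives / (true_positives + false_positives)

print(f"recall (sensitivity): {recall:.2f}")   # ~0.38
print(f"precision: {precision:.2f}")           # ~0.76
```

On those assumptions the tool catches roughly two in five questionable journals, while about three quarters of its flags are warranted; tuning it to catch more of the missed titles would push the false-positive count up, which is why the output reads better as a screening signal than a verdict.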
From Detection to Disclosure — Key Takeaways on AI Ethics from COPE’s Forum. The accuracy of AI content detectors varies wildly by discipline, training data, and how heavily a text has been edited, so false positives are common. If you use them, treat them as hints rather than verdicts: a flag that triggers deeper checks alongside other clues, such as fake references, suspect emails, manipulated images, or gibberish content. Detection tools can help identify suspect manuscripts, but they’re not foolproof and must be used in tandem with broader integrity checks.
Research posts on Bluesky are more original — and get better engagement. Posts about research on Bluesky receive substantially more attention than similar posts on X, formerly called Twitter, according to the first large-scale analysis of science content on Bluesky. The results suggest that Bluesky users engage with posts more than do users of X.
NIH Publisher Fee Cap Plan “Not Comprehensive Enough”. For example, one of the options in the NIH’s proposal would increase limits on APCs if the journal paid peer reviewers, but Marcum said he’s concerned that could result in some peer reviewers trying to game the system to enrich themselves. Instead, he said, “if the NIH really wants to move the needle on this, they should think about other ways to compensate reviewers.” Some of those ideas could include giving peer reviewers credit toward their grant applications, counting peer review as part of grant work, or requiring universities that apply for NIH grants to make allowance for their researchers to engage in peer review.
Unpaywall improvements: more gold, better green. As you can see, we’re moving in the right direction when it comes to gold and hybrid; green isn’t changing; and bronze coverage is going backwards a bit, although it’s still pretty close to the ground truth number. Our roadmap will prioritize green and gold for the next few months at least.
Practical lessons for publishers. Involving patients in research design and review, and clearly stating how they were involved, signals openness, humility and respect. It shows readers that this is not research done on people, but research done with and for them. You don’t need to be a medical publisher to apply these ideas. Any publisher that creates content with real-world consequences, from education to policy to social care, can benefit from integrating lived experience.
And finally...
I enjoyed reading Amy Brand’s editorial The future of knowledge and who should control it. Perhaps you will too. Amy argues:
The question of how, and under what conditions, published science is used to train LLMs is not just about copyright. It is about who controls the future of knowledge. Do we cede authority to opaque, extractive industries with little accountability to the research community? Or do we build systems that preserve attribution, integrity, and sustainability? If we are serious about human flourishing, about evidence-based science, and about protecting the conditions under which knowledge grows, then the research community and its institutions must proceed with discernment.
Until next time,
James


