Journalology #142: Uprightness potentate
Hello fellow journalologists,
Tomorrow I’m travelling down to London for the Researcher to Reader conference. Hopefully I’ll see some Journalology readers there.
Today’s newsletter includes snippets from news stories, written by journalists, that are likely to have broad appeal. This section is ‘free to read’ by everyone who has signed up to the Journalology newsletter.
On Tuesday, paid subscribers will receive an additional newsletter that contains a deep dive into the news that matters for publishing professionals. You can upgrade your subscription here.
News headlines
Journal giant Elsevier unveiled an AI tool that scans millions of paywalled papers. Is it worth it?
LeapSpace is offering what appears to be one of the largest corpuses of full-text, paywalled papers and books, totaling 18 million. The AI can access articles from Elsevier’s own collection and those of its four partners: Emerald, the Institute of Physics, the New England Journal of Medicine Group, and Sage Publications. (It pays its partners a royalty per use, and the tool gives their articles more exposure.) Elsevier has promised that the analytical reports will not favor citations to its own content, and that users’ queries will be kept private and not used to train the proprietary LLMs, from the OpenAI group, that support LeapSpace.
JB: This news story in Science provides a useful overview of Elsevier’s LeapSpace tool, which I’ve covered in this newsletter previously.
Elsevier publishes more journals with a top-quartile (Q1) impact factor than any other publisher, so it is best placed to make a subscription product like this work. Erik Engstrom, the RELX CEO, is quoted at the end of the news story:
Engstrom also said that, unlike other publishers that have licensed their content to AI developers, Elsevier plans to limit such sharing because it believes automated analysis of its content will be a core part of the company’s future. “We have a content advantage that we believe is very sustainable and very strong.”
‘Retract papers based on flawed citations’, urges integrity tsar
Academic publishers should also make greater efforts to remove papers citing multiple retracted studies, continued Cabanac. “This is something I find infuriating – publishers should be accountable for what appears in their books, journals and conference papers. They should be constantly reassessing their materials and removing them when problems arise… but I know of no publisher which has set up a task force to check the bibliographies of the work they publish and sell,” he said.
“It’s difficult to do but I have done the work for them,” he said, referring to the publicly available cache of data on his Problematic Paper Screener portal established in February 2021.
JB: Guillaume Cabanac, who uses tortured phrases to identify dubious papers, is the “integrity tsar” mentioned in the headline. “Uprightness potentate” would be a more appropriate moniker, surely?
Guillaume also appeared in another news story: Correction to a retraction highlights tortured phrases have been around longer than LLMs.
To his knowledge, LLMs don’t produce tortured phrases, Cabanac said in a follow-up email to us. Sage issued a correction to its retraction in January, stating the original notice “incorrectly cited the origin of tortured phrases to the use of a large language model.”
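If a publisher wanted to run the kind of bibliography check Guillaume describes, the core step is mechanical: compare every cited DOI against a list of retracted DOIs. Here is a minimal sketch in Python, assuming you already have a local retraction dataset exported as a CSV with a DOI column; the file name and column name below are placeholders, not a real schema.

```python
import csv

def load_retracted_dois(path: str) -> set[str]:
    """Load a set of retracted DOIs from a CSV file with a 'doi' column (placeholder schema)."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["doi"].strip().lower() for row in csv.DictReader(f) if row.get("doi")}

def flag_retracted_citations(cited_dois: list[str], retracted: set[str]) -> list[str]:
    """Return the cited DOIs that appear in the retraction list."""
    return [doi for doi in cited_dois if doi.strip().lower() in retracted]

if __name__ == "__main__":
    # "retractions.csv" is a stand-in for whichever retraction database export you use.
    retracted = load_retracted_dois("retractions.csv")
    references = ["10.1234/example.001", "10.1234/example.002"]  # a paper's reference list
    for doi in flag_retracted_citations(references, retracted):
        print(f"Cited work {doi} has been retracted; review this citation.")
```

The hard part, of course, is not the lookup but doing it continuously across an entire back catalogue, which is precisely the task force Guillaume says no publisher has set up.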
Nanoscience is latest discipline to embrace large-scale replication efforts
Although replication efforts such as these have become more common over the past decade, work that highlights problems and mistakes is not yet widely viewed as a legitimately mainstream part of research, argues Malte Elson, a psychologist and metascientist at the University of Bern. “I think there’s still a great stigma in your work not being replicated or somebody finding an error,” he says. “This is a very regrettable sort of cultural phenomenon. Part of discovering truth is being able to say when something is wrong.”
Stone, parchment or laser-written glass? Scientists find new way to preserve data
Writing in the journal Nature, Black and colleagues report how the system works by turning data – in the form of bits – into groups of symbols, which are then encoded as tiny deformations, or voxels, within a piece of glass using a femtosecond laser. Several hundred layers of these voxels, Black notes, can be made within 2mm of glass. The system uses a single laser pulse to make each voxel, making it highly efficient. By splitting the laser into four independent beams writing at the same time, the team say the technology can record 65.9m bits per second. The researchers found they could store 4.84TB of data in a 12 sq cm piece of fused silica glass, 2mm deep – about the same amount of information that is held in 2m printed books, an accompanying article by researchers in China notes.
JB: You can read the research paper here and the accompanying News & Views article here, which concludes with this heady possibility:
Silica unites performance, durability and practical feasibility, transforming a laboratory concept into a viable solution for preserving the records of human civilization. If implemented at scale, it could represent a milestone in the history of knowledge storage, akin to oracle bones, medieval parchment or the modern hard drive. One day, a single piece of glass might carry the torch of human culture and knowledge across millennia.
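As a back-of-envelope check on those figures (my arithmetic, not the paper's), the quoted write speed implies that filling a single 4.84 TB piece of glass would take roughly a week, and the "2m printed books" comparison works out to a couple of megabytes per book:

```python
# Back-of-envelope arithmetic using the figures quoted in the news story.
capacity_bytes = 4.84e12          # 4.84 TB per 12 sq cm piece of fused silica
write_rate_bits_per_s = 65.9e6    # 65.9 million bits per second (four parallel beams)

write_time_s = capacity_bytes * 8 / write_rate_bits_per_s
print(f"Time to fill one piece: {write_time_s / 86_400:.1f} days")   # ~6.8 days

books_equivalent = 2_000_000      # "about 2m printed books", per the accompanying article
print(f"Implied size per book: {capacity_bytes / books_equivalent / 1e6:.1f} MB")  # ~2.4 MB
```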
OSTP to review potential “repeal” of Nelson Memo
Tucked into the “Joint Explanatory Statement” accompanying the Appropriations “Minibus” passed in January is a non-binding provision requesting that the White House Office of Science and Technology Policy (OSTP) report on the status of a “process of repealing the August 25, 2022, Memorandum to Executive Departments and Agencies entitled, ‘Ensuring Free, Immediate, and Equitable Access to Federally Funded Research,’” commonly known as the Nelson memo. STM has learned that the current Administration has expressed concerns about the Nelson memo and is reviewing potential options.
JB: The Nelson memo was the biggest story in academic publishing in 2022. US science has been turned upside down since then. A year ago David Crotty, who is now Executive Director of Cold Spring Harbor Laboratory Press, wrote:
Been thinking a lot about the incredible amount of wasted time, effort, and money that publishers and publishing societies have spent chasing the short-lived whims of research funders. Think about how much work went into the whole “Transformative Journal” thing, only to see that eventually squashed by the very funders who proposed it. Or if you flipped your whole portfolio to fully-OA because of the Gates policy and now they tell you they will no longer pay APCs. In the US with the imminent death of the Nelson Memo, how much time and effort has been invested in strategies and products to meet the requirements of that policy which will now likely be put on hold or abandoned altogether?
Publishers of all types will be wary of changing their strategies in response to future funder mandates, I suspect.
Can we predict which chemistry research projects will pay off?
A new metrics-based approach aims to analyze paper citations, patents, news articles, and more to predict areas that are primed to provide the most benefit for the research dollar. Experts caution that the tool and its focus on metrics may not provide the level of desired insight, especially when it comes to predicting societal impact.
Microsoft has a new plan to prove what’s real and what’s AI online
It is into this mess that Microsoft has put forward a blueprint, shared with MIT Technology Review, for how to prove what’s real online. An AI safety research team at the company recently evaluated how methods for documenting digital manipulation are faring against today’s most worrying AI developments, like interactive deepfakes and widely accessible hyperrealistic models. It then recommended technical standards that can be adopted by AI companies and social media platforms.
JB: You can read the Microsoft report here: Media Integrity and Authentication: Status, Directions, and Futures.
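The report itself is the place to go for the recommended standards. For readers who want the underlying principle rather than the specifics, the basic idea behind cryptographic provenance is to hash the content at the point of capture, sign the hash, and verify both later; standards such as C2PA do this by embedding signed manifests in the media file. The sketch below illustrates only that general principle, using Python's standard library with HMAC standing in for a real public-key signature; it is not Microsoft's proposal.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-real-signing-key"  # stand-in for a proper private key

def sign_content(content: bytes) -> str:
    """Hash the media bytes and 'sign' the digest (HMAC stands in for a real signature)."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Recompute the signature; any tampering changes the hash and fails verification."""
    return hmac.compare_digest(sign_content(content), signature)

if __name__ == "__main__":
    original = b"raw bytes of an image or video frame"
    sig = sign_content(original)
    print(verify_content(original, sig))                 # True: untouched content verifies
    print(verify_content(original + b"edited", sig))     # False: any edit breaks the signature
```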
‘Toxic culture’ caused by REF pressure to target top journals
Academics claim they are coming under growing pressure to publish in highly rated publications ahead of the Research Excellence Framework (REF), contributing to a “target-driven” culture that discriminates against other types of research outputs. In a highly critical report into working practices at the University of Liverpool Management School prepared by University and College Union (UCU) representatives, policies that require staff to publish in journals designated 3* or 4* by the Academic Journal Guide and in those on the FT50 list are cited as a source of “considerable apprehension” for a majority of staff.
JB: The Academic Journal Guide and FT Research Rank (see here for an explanation) are hugely influential in economics, business and management research.
And finally…
A former colleague of mine, Helen Pearson, has written a book that will be published in April and will likely appeal to many readers of this newsletter.
Here’s an excerpt from the blurb of Beyond Belief: How Evidence Shows What Really Works:
Today, more and more people around the globe are using scientific evidence to figure out what works—in health, government and business as well as conservation, schools and parenting. This wasn’t always the case. This book tells the story of the evidence revolution—a worldwide movement that promotes evidence-based thinking—and shows how it can help us all, especially in an age of alternative facts.
Helen is the former Chief Magazine Editor of Nature and is Honorary Professor of Practice at University College London. I always enjoy reading Helen’s journalistic work and I’m sure you will too. You can preorder the book on Amazon and other retailers or directly from Princeton University Press (the reviews are impressive).
Until next time,
James

