About 20,000 scientists are publishing an “implausibly high” number of papers in scholarly journals and have an unusually high number of new collaborators, a new study suggests.
The analysis, published in December in Accountability in Research, analyzed the publication patterns of around 200,000 researchers on Stanford University’s list of top 2% scientists, which is based on citation metrics (DOI: 10.1080/08989621.2024.2445280).
It found that around 10% of those on the list—around 20,000 scientists—published an improbable number of papers. Some produced hundreds of studies per year with hundreds to thousands of new coauthors annually.
“It turns out that researchers, particularly the younger ones, are being pressured into these sort of practices that prioritize quantity over quality,” says study coauthor Simone Pilia, a geoscientist at King Fahd University of Petroleum and Minerals (KFUPM). “That is threatening the very foundation of academic integrity.”
The 200,000 scientists studied by Pilia and his coauthor, Peter Mora, also at KFUPM, were from 22 different scientific disciplines and 174 subfields. The authors also studied the rates of publication and coauthorship among 462 Nobel laureates from the fields of physics, chemistry, medicine, and economics.
What surprised Pilia and Mora is the sheer number of authors who appear to be using unethical practices, such as being listed as a coauthor without contributing meaningfully to the research, to boost their publication numbers. Around 1,000 of them are early-career researchers who have worked in academia for 10 years or less.
“There is a system that is rewarding a superficial volume of quality work,” Pilia says. “When such patterns become normality, then it doesn’t just harm individuals, it completely devalues the academic process.”
To address the problem of inflated metrics, Pilia and Mora suggest adjusting or correcting metrics when researchers reach certain thresholds of published papers and coauthors. Doing so would reduce the value of high-volume publishing, Pilia says.
But Ludo Waltman, an information scientist who is the deputy director of the Centre for Science and Technology Studies at Leiden University and was not involved with the study, says he has “significant reservations” about the adjustment to metrics that the authors propose.
Instead, Waltman says, publishing metrics should play a modest role in research evaluation, and scientists should be evaluated on a broad range of research activities. “The metrics should be embedded in a process where experts, based on expert judgment, make decisions,” he says.
For Waltman, the study is problematic because it assumes that metrics play an important role in evaluating researchers. By adjusting or correcting existing metrics, Waltman says, the authors are introducing unnecessary complexity.
“Essentially, I think they are creating black boxes so a typical evaluator will not be able to really understand how these metrics work,” he says. “I think we need metrics that are really easy to understand, metrics that are fully transparent, and metrics that evaluators can link to the broader context that they take into account when they make decisions.”