The Presidential Commission to Make America Healthy Again (MAHA) released a report on children’s health last month, and it uses the word “crisis” no fewer than 40 times within 72 pages. One of these crises, according to the document, is the “replication crisis.” The idea that a research paper’s methods and findings can be duplicated in a new experiment carried out by another, independent laboratory is a core tenet of science, one that adherents of the MAHA movement say has been compromised. Addressing the “replication crisis” is the first of 10 next steps recommended at the end of the MAHA report.
“Replicable, reproducible and generalizable research must serve as the basis for truth,” says US National Institutes of Health (NIH) director Jay Bhattacharya via a spokesperson. “Unfortunately, many research findings are not reproducible,” he continues. “This is not a moral failing of individuals but rather a systemic issue that places too much pressure on publishing only favorable results.” He says he is actively exploring ways to reward scientists who make it easier for their peers to replicate their work.
In June 2025, the Paragon Health Institute—a think tank set up by a former health policy adviser to Donald J. Trump—published a separate report with suggestions for how Bhattacharya might achieve his aim. One of its key recommendations is that the NIH should be mandated to devote at least 0.1% of its annual budget, or about $48 million under its fiscal 2025 appropriation, to fund replication studies—in other words, research that specifically tries to recreate the findings and sometimes the methods of previously published work. If the NIH doesn’t change its “antiquated structure and set of internal policies,” reads the Paragon report, it threatens “the future of scientific research and U.S. leadership in the sciences.”
There are data to support those who are worried about the difficulty of replicating experiments.
In a 2021 study, a team of researchers from the nonprofit Center for Open Science attempted to replicate 53 cancer research studies. Their success rate was just 46%. But that doesn’t necessarily mean the word “crisis” is warranted, says Brian Nosek, one of the study’s authors and the executive director of the Center for Open Science.
“If we define a crisis as something that’s changing for the worse, we don’t have the evidence to say that’s happening,” says Nosek. It’s not implausible to argue that things are getting worse, he says, but the data on that aren’t clear enough to give a concrete answer.
It’s difficult to say what the ideal percentage of success would be in replication studies like Nosek’s; it’s unlikely that 100% would ever be achieved. Nosek says we should expect high levels of reproducibility for findings that are translated into government policy, but we could tolerate lower reproducibility for more exploratory research.
“There’s no hard-and-fast target,” says Stuart Buck, who authored the Paragon Health Institute report. “But I’d argue that we should expect more like 80–90% of science to be replicable.”
Whether research replicability is getting worse or not, Nosek and other scientists agree at least in part with the MAHA report that the difficulties surrounding replication are real and substantial. The situation is complex; there’s no single cause and therefore no single solution, and the problem needs to be taken seriously. But none of this means that science is to be disregarded if it can’t be replicated, says Erica Goldman, director of policy entrepreneurship at the Federation of American Scientists (FAS).
“Science is not bad; it’s just flawed,” she says. “But ultimately the goal of the science community is to improve it.”
The fact that scientists and academics have been highlighting the reproducibility problem for decades is paradoxically comforting. “Science is trustworthy because it doesn’t trust itself,” Nosek says.
So why is the White House interested in the issue of reproducibility now? Nosek wonders. The simultaneous funding cuts to research projects make it tough for him to believe that the MAHA report is a genuine attempt to enable science to address its structural flaws, he says.
“It’s hard to read it as an investment in wanting science to improve,” Nosek says; he criticizes the report for seeking to demand perfect evidence from science. He argues that perfection can never be achieved, even if the concerns around reproducibility are solved.
Nosek also fears that the result of the MAHA movement could be that the federal government demands an impossibly high quality of evidence, leading to good data being disregarded because they inevitably cannot measure up to perfection. “The scientific method is not fixed. Fundamentally, the quality of evidence is always improvable,” he says.
Goldman is meanwhile cautiously optimistic. “If the [MAHA report] is legitimately a request for reform, rather than a justification for just saying science is bad and let’s do away with it, then there are a lot of people and organizations, FAS included, who have good ideas for how to fix it and are willing to package them up and work with the administration to wrestle with this,” she says.
Buck, however, says it’s right for the administration to tackle this issue now. “It’s important for the public to be broadly able to trust that many billions of dollars in tax are well spent.”
One of the best ways to tackle the replicability problem, Nosek says, is to reinvent the incentives that drive science forward.
“Publication is the currency of advancement in science,” Nosek says. “Some things are more likely to be published than other things. For sure, it’s more likely if it’s a novel result rather than getting additional evidence for a prior claim.” That preference creates tension with the lofty ideals of what science actually is: the pursuit of empirical truth whether the result is exciting or not.
“The reward system for science is not necessarily aligned with scientific values,” Nosek says. Getting negative or null results published, for example, is rare. So that puts pressure on scientists to find a positive result. That’s bad for reproducibility in two ways.
Firstly, it creates an incentive for scientists to selectively choose which data points to pay attention to. In the extreme, it can even entice some people to fabricate results. A 2024 meta-analysis of 75,000 studies across a broad range of fields, including the biomedical sciences and cell biology, concluded that results from as many as one in seven may have been at least partially faked. Previous evaluations, however, have estimated lower figures, of between 2% and 8%, for falsified results. Whatever the true scale of scientific fraud, it is likely to explain some of the difficulties that scientists come across in attempting to reproduce their peers’ experiments.
Secondly, the drive for publication creates a culture of competition, not cooperation, among researchers. That doesn’t exactly motivate people to be as open as possible in describing their methodology, Nosek says, because it can feel like handing your know-how to the competition. “We’ve set up a system where it might not be in my interest to show you everything.”
Research funders also have a part to play in this quagmire, says Thomas Powers, who directs the University of Delaware’s Center for Science, Ethics, and Public Policy. “The funding agencies got tired of the replication crisis,” he says. “They got tired of funding science that’s already been done.” With that in mind, the Paragon Health Institute’s recommendation that the NIH be forced to spend some of its budget on replication studies could be helpful.
Nosek also wants academia to create clear career paths for people doing replication studies—it should be work that university leaders admire and want to promote.
Both Goldman and Nosek want preregistration of research to become the norm. The idea would be that researchers approach a journal before they even start collecting data. “They’d tell them how and what they’re going to do,” says Goldman. From that, the journal would decide whether to commit to publishing the research before knowing the outcome.
That would mean that negative findings (in other words, those that do not support the working hypothesis) would be published alongside positive findings. If scientists were guaranteed publication regardless of their results, it would make them less competitive with one another—so the theory goes—thereby making them more likely to be open with each other about their methodologies and making it easier for science to be replicated and reproduced.

For the replication crisis to be solved, say Goldman and Nosek, the money needs to be there; the incentives of science need to change; and governments need to engage with the problem in earnest.