
Policy

The Reproducibility Problem

Misplaced research incentives and poor experimental design lead to studies that can’t be replicated

by Andrea Widener
May 26, 2014 | A version of this story appeared in Volume 92, Issue 21

Researchers had high hopes for a new treatment for the devastating disease amyotrophic lateral sclerosis (ALS) when the National Institute of Neurological Disorders & Stroke (NINDS) funded a clinical trial in 2003. But the drug, minocycline, actually made patients worse.

Looking back, faulty underlying research may have been to blame, says Story C. Landis, director of the institute, which is part of the National Institutes of Health. Mouse studies that showed the drug was effective used only 10 animals, and that research wasn’t randomized or blinded. If NINDS had taken a hard look at those initial experiments, it might not have funded the larger trial in the first place, she says.
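
The missing safeguards Landis describes, randomization and blinding, are straightforward to build into a study. Below is a minimal Python sketch (animal IDs, group sizes, and arm names are all hypothetical) of randomly assigning animals to treatment arms and issuing coded labels so that the staff scoring outcomes don't know which arm an animal is in:

    import random

    rng = random.Random(42)  # fixed seed so the allocation can be audited later
    animals = [f"mouse_{i:02d}" for i in range(1, 21)]
    rng.shuffle(animals)

    # Randomization: the shuffled list, not the experimenter, decides the arms.
    arms = {"drug": animals[:10], "vehicle": animals[10:]}

    # Blinding: scoring staff see only the codes; the key mapping codes to
    # arms is held separately until the data are locked.
    code = {"drug": "A", "vehicle": "B"}
    blinded = {animal: code[arm] for arm, group in arms.items() for animal in group}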

Reproducibility Resolution

Several ideas could help address the problem of studies that cannot be replicated:

◾ Supporting studies specifically designed to reproduce research and publish negative results

◾ Improving the publication system to encourage more critiques of study processes and results

◾ Ensuring that the review process for grants and articles examines the study design

◾ Designing lab management software, such as the Open Science Framework, to make research transparency easier

◾ Training graduate students in good experimental design

Minocycline is just one of dozens of examples of promising drugs that have failed once tested in large, multimillion-dollar clinical trials. Although clinical trials can fail for many reasons, Landis and other experts say that research that has not been or cannot be reproduced is a growing problem, both in drug trials and in the larger research community. Misplaced incentives lead scientists away from reproducing or correctly documenting their work. And bad experimental design often means that studies cannot be reproduced at all.

“The notion has existed for quite some time that science is self-correcting. While that principle remains true over the long term, over the short term that is not quite as true as we would like,” Landis says. “The checks and balances that would normally allow science to be self-correcting have been perturbed to the extent that we have a significant problem.”

Why the reproducibility problem exists and how to address it was the focus of two sessions at the American Association for the Advancement of Science’s annual Forum on Science & Technology Policy earlier this month.

Openness and reproducibility are the central values of science, says Brian A. Nosek, an associate professor of psychology at the University of Virginia who has been studying the problem. But they are not part of most scientists' daily practice or motivations. “The incentives for me as an individual researcher are focused on getting published and getting continued funding, not on getting it right,” Nosek says.

The pressure to publish gets much of the blame, since tenure and grant decisions are often based on where a researcher publishes and how often. Scientists rarely publish or correctly document every study that is done in their lab. They focus instead on which studies are most likely to get published and on presenting their work in the best light, Nosek says. Negative studies or those that don’t present a compelling narrative rarely leave the lab.

“What’s good for the science—reporting research as accurately as possible—might not be as good for me,” he says.

Many of the reproducibility problems are related to bad study design, experts say. Researchers may choose outcomes that are easy to measure rather than those that are most relevant. They may set out specific protocols for a study but not follow them if they don’t get the results they want. They may not understand or seek out the best statistical techniques to analyze their data. And they may not confirm that their materials, such as cell lines, are what they think they are. All of these problems may lead them to draw conclusions that go beyond their findings.
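
One concrete statistical safeguard is a prospective power analysis: estimating, before any experiment is run, how many subjects are needed to reliably detect the expected effect. Here is a minimal sketch using the statsmodels Python library, with assumed (hypothetical) effect-size and power targets:

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Subjects needed per group to detect a medium effect (Cohen's d = 0.5)
    # at alpha = 0.05 with the conventional 80% power.
    n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(f"needed per group: {n:.0f}")  # about 64

    # Power actually achieved with only 5 subjects per group, as in a
    # 10-animal study like the one Landis describes.
    p = analysis.solve_power(effect_size=0.5, nobs1=5, alpha=0.05)
    print(f"power with 5 per group: {p:.2f}")  # roughly 0.1

Under these assumptions, a 10-animal study would detect a real medium-sized effect only about one time in ten, which is why reviewers increasingly ask for this calculation up front.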

Journals are trying to attack this problem in different ways, says Robert M. Golub, deputy editor at the Journal of the American Medical Association (JAMA). Together with other medical journals, JAMA has created guidelines for researchers performing clinical trials, including preregistration of methods and protocols before a trial begins. JAMA also has a statistician review most papers.

But journal editors can’t be responsible for changing a study’s design. “By the time a study reaches an editor, it is too late. The study has already been done,” Golub explains. “Education is perhaps the most important for everybody: editors and researchers, as well as consumers of the literature.”

And education is where NIH is making changes, both in its peer review process and in its training efforts, Landis says. NIH is asking peer reviewers to be more critical about research grant proposals. In the case of major clinical trials, NIH itself may even replicate the foundational studies.

The agency is also designing a one-hour course on good experimental design for intramural graduate students and postdocs. If the course is successful, NIH will require its externally supported students to get the same training.
