It’s time for positive action on negative results

Perspectives: Structural biologist suggests that chemists can help redefine good research behavior by telling the whole story

by Stephen Curry
March 7, 2016 | APPEARED IN VOLUME 94, ISSUE 10

Credit: Steve Ritter/Yang Ku/C&EN

Publish or perish. The reality of research is not as brutal as this infamous dictum might suggest. But even though your life may not be at stake, your livelihood probably is if you don’t comply with the norms of the scientific world.

It is reasonable to expect researchers to produce papers. Yet, as the audit culture that has flooded many areas of human activity soaks into the fabric of academia, researchers are increasingly immersed in the metric tide. Publishing papers has come to mean accumulating points: impact factors, citation counts, h indexes, and university league table rankings. We now find ourselves subjugated to a system of incentives that is poorly geared to the engine of discovery and communication. Let’s face it, the publication system is misfiring.

No one intended for things to turn out this way. But there is a widespread sense that we have gotten lost. When that happens, it is sensible to pause and take stock.

Stephen Curry is a professor of structural biology at Imperial College London, where he studies the replication mechanisms of RNA viruses. He also writes regularly on science and the scientific life at the Guardian and his Reciprocal Space blog.

What is the purpose of a research publication? For sure it is to claim priority and demonstrate originality. We should not be squeamish about the egocentric forces at play here. Publishing also serves, ideally, to map out the territory of our understanding and to inform others so that guided by earlier findings we might penetrate deeper in subsequent forays.

The researchers with the biggest discoveries win the most space in journals, which are keen to burnish their reputations as repositories of great scientific advances. The reports of those who return empty-handed from the lab, however smart or well-crafted the experiment, rarely make it into the pages of the scientific literature.

And that’s a problem. It creates a publication bias that fills the literature with positive findings by systematically excluding negative results. No one likes negative results or seeks them out, regardless of what science philosopher Karl Popper has told us about the value of falsifying hypotheses rather than proving them.

Nevertheless, negative results do provide a useful guide for ideas and experiments that have been tried and found wanting. By not publishing what didn’t work, we condemn our colleagues to inefficiency, keeping them in ignorance of the lessons we have learned.

We also run the risk of undermining public trust if we fail to provide a full account of research that is, for the most part, publicly funded. In an open society, it is in the interest of the research community to be up front about the fallible side of science. Worse still, as the AllTrials campaign to ensure the registration and full reporting of all clinical trials shows, selective publication of positive results of trials of new drugs and treatments can do real harm. People die, as was revealed in the multi-billion-dollar lawsuits against Merck & Co. over its Vioxx painkiller and GlaxoSmithKline over the diabetes drug Avandia.

Until recently, the reasons for not publishing negative results were clear. Publishers don’t like them because they don’t attract attention or citations and so threaten journal impact factors. And reviewers, typically asked to gauge the significance of a piece of work, have become conditioned to see importance more readily in positive outcomes.

It doesn’t have to be this way. The open access movement has given rise to new models of publication that judge research work not on significance but solely on originality and competence. That is now giving us new avenues for publishing negative results, such as PLOS One, F1000 Research, PeerJ, Scientific Reports, and the recently announced ACS Omega. Uptake of these new author-pays megajournals should grow as they compete on price for the services offered to researchers.

It was these competitive rates that induced me to publish negative results from my laboratory for the first time last year, in work on the crystal structure of a norovirus protease. Before that, our failed experiments languished in laboratory notebooks, increasingly forgotten as students and postdocs rotated through the lab. There seemed to be no way to get them published and no reason to do so.

The rise of free preprint archives in the life sciences, such as bioRxiv, has also caught my attention and arguably creates an appealing venue for ensuring that negative results can serve as signposts for other researchers. Such archives are not peer-reviewed, but after more than 20 years, the arXiv preprint repository used by various disciplines of physics, mathematics, and computer science shows that they can provide a valuable resource. All we need now is a preprint archive for chemists and chemical engineers.

That said, it will take more than technological or editorial changes to properly reform scientific publishing. For one thing, confusion about open access abounds. The author-pays model of open access journals has stirred concerns about quality control that need to be addressed. In practice, these can be tackled by clear separation of editorial decision-making from questions of payment, encouragement of postpublication review (all the more effective thanks to the worldwide readership offered by open access), and provision of clear advice on journal quality through publication services such as Think. Check. Submit. and the Directory of Open Access Journals.

We also need to redefine good researcher behavior and incentivize it. At present the focus is on publishing “well.” But we need to be more holistic in determining what that means, particularly if we want to retain the trust of funders and the public. We should find ways to reward researchers who publish high-quality research that is reproducible and who publish rapidly, openly, and completely, including all their data and negative results. Publishing will always be an ego trip. But it is also a professional duty owed to our colleagues and to the wider world.

It Didn’t Work: Have you published negative results? If so, tell us where you published them and share the impact they had below:

Views expressed on this page are those of the author and not necessarily those of ACS.



T Foo (March 9, 2016 4:18 PM)
Actually, results are just results. The responses of people are the things that are "positive" or "negative." In general, if the physical world is not as the researchers would like to view it, or if nature cannot be manipulated in the way the researchers would like, the response is negative. In reality, experimental results are sources of learning, whether the learning is profound or mundane.
L Ellis (March 23, 2016 11:08 AM)
Hi T Foo,

In many areas of science, yes, results are results regardless of whether they matched your expectations. I think there are some areas, however, like experimental physics and engineering, where having journal articles express what doesn't work would be very beneficial! If research groups are attempting the same setup that has already been tried and discarded by someone else, it would save a lot of time and money to know that.
Heather Montgomery (March 10, 2016 8:42 AM)
Reading your article on the need for chemists to publish negative results, it appears that many of the points you mention are being addressed by Royal Society Open Science. This Open Access journal covers the whole of science; however, for articles in the chemistry section, the Royal Society of Chemistry is working in collaboration with the Royal Society to manage the commissioning and peer review processes.
Royal Society Open Science actively encourages submissions of negative findings, meta-analyses, and studies testing reproducibility of significant work. The journal operates objective peer review, with any judgement of potential impact left to the reader. In addition, there is an option to choose open peer review, whereby referees sign their reports and their reports are published along with author responses. Finally, post-publication commenting is available on the article page for all published articles.
Currently Article Processing Charges are being waived so the articles being published are both free to read and free to publish. We hope that your readers may take up the opportunity to publish their results – either negative or positive – in Royal Society Open Science.

Heather Montgomery
Deputy Editor Royal Society Open Science (chemistry)
Stephen Curry (March 15, 2016 1:41 PM)
Thanks for the information. It will be good to have more competitors for ACS Omega (mentioned in this piece), which I hope might lead to better value for money for chemists seeking to publish in OA venues. For negative results, we need the price barriers to be as low as possible.
Chemonmor Campbell (March 18, 2016 10:58 PM)
I believe scientists/researchers do publish negative results. It's just that sometimes during the write-up, research questions and topics are revised and phrased in such a way that they don't seem like negative results. But as T Foo said, research is supposed to be a learning experience and a way of expanding your knowledge. There shouldn't be anything negative about it.

Chemonmor Campbell
Doctoral Student
Sneha Kulkarni (March 29, 2016 5:58 AM)
One of the biggest incentives to publish the so-called “negative results” would be considering all results as just results without assigning them the tags of negative and positive. Discovering what does not work opens new avenues to find what works. This demonstrates originality, which as you rightly pointed out in the article, is one of the motivations behind publication. Self-correction is considered one of the hallmarks of science and “negative” results can act as a catalyst in ensuring consistent scientific progress.
