Policy

NIH Grant Proposal Review Likely Picks Top Applicants, Study Finds

Research Funding: Highest-ranked projects linked to most publications and citations

by Andrea Widener
August 3, 2015 | A version of this story appeared in Volume 93, Issue 31

NIH grants at a glance

14.2: Average score of grants awarded through peer review, 1980–2008
0–100: Possible scoring range, with 0 being the best
51,073: Applications for NIH research projects in 2014
9,241: Awards in 2014
18%: Success rate in 2014
32%: Success rate in 1999, 15 years earlier
$473,000: Average research grant size

SOURCES: Science 2015, DOI: 10.1126/science.aaa0185; NIH

Each year, billions of federal dollars are awarded to scientists through peer review. The National Institutes of Health alone distributes 80% of its $30 billion budget through competitive grant applications. That has been standard operating procedure for decades.

But it’s hard to prove that peer review of grants actually selects the research projects likely to make a large impact on the field. Ideally, reviewers would be unbiased in choosing the most promising research projects—but they might let who they know or what type of science they prefer influence their decisions. Limited attempts to study the peer review process in the past have yielded mixed results.

A recently published examination of 28 years of NIH grant awards concludes that peer review does pick the highest-quality research. The analysis, conducted by two economists and published earlier this year in Science, shows a link between the applications that are highest ranked by reviewers and funded grants that produce the most publications, citations, and patents (DOI: 10.1126/science.aaa0185).

The study likely isn’t the last word on peer review: the correlation it found is small, and the analysis doesn’t account for variation among individual disciplines or the agency’s institutes. But it is important because it is the first to examine peer review across the NIH system, says Jeremy M. Berg, director of the Institute for Personalized Medicine at the University of Pittsburgh and a former NIH institute director.

“It is very reassuring that the peer review system does actually show some sort of a signal,” Berg says.

Report coauthor Danielle Li of Harvard University’s business school studies how the U.S. provides incentives for innovation. There is little evidence to say whether the U.S. government effectively promotes innovation, she says. That’s why she was interested in studying NIH, which is the largest nondefense supporter of academic scientists.

The agency awards grants primarily through reviews by small groups of scientists in so-called study sections focused on particular research topics. Selected scientists review applications and then rank them, which results in a percentile score between 0 and 100. The lower the score, the more highly the reviewers regard an application.
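
The mechanics of that conversion can be sketched in a few lines of Python. This is a minimal illustration assuming a simple linear rank-to-percentile mapping; NIH’s actual scoring procedure involves more steps than the article describes.

```python
# A minimal sketch of a rank-to-percentile conversion, assuming a simple
# linear mapping from rank to percentile. Illustrative only; NIH's actual
# procedure is more involved.

def percentile_scores(raw_scores):
    """Convert raw reviewer scores (lower = better) to 0-100 percentiles."""
    n = len(raw_scores)
    order = sorted(range(n), key=lambda i: raw_scores[i])
    percentiles = [0.0] * n
    for rank, i in enumerate(order):
        percentiles[i] = 100.0 * rank / n  # 0 = best-regarded application
    return percentiles

# Four applications ranked by a hypothetical study section
print(percentile_scores([2.1, 3.5, 1.8, 2.9]))  # [25.0, 75.0, 0.0, 50.0]
```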

Li and her coauthor, Leila Agha of Boston University’s School of Management, examined more than 137,000 grants awarded by NIH between 1980 and 2008. The authors then looked at three potential measures of quality: papers published within five years of the original grant award, citations for those papers, and patent applications related to that particular grant.
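
In spirit, the per-grant tally the authors describe might look like the sketch below. The data layout and field names are illustrative assumptions, not the authors’ actual schema.

```python
# A minimal sketch of the per-grant tally described above: publications
# within five years of the award, citations to those papers, and patents
# linked to the grant. All field names are illustrative assumptions.

def quality_measures(award_year, papers, patents):
    """Tally the three outcome measures for a single grant.

    papers: list of dicts like {"year": 1991, "citations": 30}
    patents: list of patent identifiers linked to the grant
    """
    in_window = [p for p in papers
                 if award_year <= p["year"] <= award_year + 5]
    return {
        "publications": len(in_window),
        "citations": sum(p["citations"] for p in in_window),
        "patents": len(patents),
    }

# Hypothetical grant awarded in 1990: one paper falls inside the window
print(quality_measures(1990,
                       [{"year": 1991, "citations": 30},
                        {"year": 1997, "citations": 12}],
                       ["US5000000"]))
# {'publications': 1, 'citations': 30, 'patents': 1}
```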

Li and Agha found a statistically significant link between a grant’s peer review score and the number of papers, citations, and patents it yielded. For example, a one-point change for the worse in a grant recipient’s peer review score was related to 1.6% fewer publications and 2% fewer citations. That link held even when the researchers controlled for factors that might influence reviewers, such as applicants’ publication and grant histories, educational backgrounds, or institutional affiliations.
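
To put those per-point figures on a concrete scale, the sketch below compounds them over a ten-point difference in score, which implies roughly 15% fewer publications and 18% fewer citations. Treating the effect as multiplicative per point is an assumption of this illustration, not something the paper states.

```python
# A back-of-the-envelope sketch of the reported effect sizes: 1.6% fewer
# publications and 2% fewer citations per one-point-worse score. Compounding
# the per-point decline multiplicatively is this sketch's assumption.

def expected_output(baseline, drop_per_point, points_worse):
    """Scale a baseline output count by a per-point fractional decline."""
    return baseline * (1 - drop_per_point) ** points_worse

# Hypothetical grant: 20 papers and 300 citations at the best score,
# then scored 10 points worse
papers = expected_output(20, 0.016, 10)      # ~17 publications expected
citations = expected_output(300, 0.02, 10)   # ~245 citations expected
print(round(papers, 1), round(citations, 1))  # 17.0 245.1
```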

Previous studies of peer review have usually looked at smaller slices of the NIH funding pie. For example, National Heart, Lung & Blood Institute Division Director Michael Lauer found that the peer review score did not predict a grant’s success at his institute. So the Science paper can’t be used to predict whether peer review can select the best grants in any particular field, Lauer says.

It also doesn’t demonstrate that reviewers can detect a quality difference between two closely ranked grants, Lauer says. “The ability of a funding agency to discriminate one grant that is likely to be productive from any one grant that is not is exceedingly low,” he says.

Berg points out that the link Li and Agha found between peer review score and quality is strongest for the top-ranked grants, but the correlation becomes more tenuous for lower-ranked ones. For grants that fall close to the funding cutoff between successful and unsuccessful applications, “you know you are going to be leaving things unfunded that are just as good as the things that are being funded,” Berg says.

That’s a concern because researchers still don’t know whether the most innovative science is being funded through the peer review selection process, Berg says. The worry under the current system is that “people won’t propose things that are ‘out there’ because there is no chance of getting funded.” It’s also unclear whether papers, citations, and patents are really the best measures of quality.

The paper doesn’t attempt to demonstrate that peer review is the best way to choose grants, Li explains. To do so, a separate study would have to examine whether another method—for example, giving grants to researchers with the most cited publications—might better identify breakthrough ideas worthy of federal funding.

Even though this paper doesn’t explain everything, Lauer says it is an important step in applying science to how research is funded by the federal government. “We are thinking about how we conduct our work in a data-driven manner instead of having arguments about opinions and our own personal prejudices.”
