Nature featured an interesting opinion piece on Feb. 19 by Australia’s chief scientific adviser, Alan Finkel (2019, DOI: 10.1038/d41586-019-00613-z). In his article, he argues that the current incentive system in scientific research encourages quantity over quality and that, for the sake of science, the focus should shift away from metrics and toward best practice and better research.
“People respond to incentives. Change will come only when grants and promotions are contingent on best practice,” he writes.
Finkel suggests, among other measures, that when evaluating researchers’ performance for grants and promotions, funders and supervisors start by examining the five most-read and most-cited articles the researchers have published in the past five years. Researchers would also submit a statement describing their work and how it has contributed to society. Contrary to common practice, this process would ignore the exact number of papers the researchers have published.
Finkel also calls for training for researchers in research integrity and data management, as well as in mentorship and leadership. Regarding mentorship, he proposes deprioritizing head count—the number of people one supervises—as a performance metric, with mentors being judged instead “by impact statements about the projects and career progression of at least two PhD students.”
It’s good to hear someone in his position calling for such fundamental, potentially transformative change. We have created a situation in which a paper count or a head count is used as a measurement of productivity, but these metrics have shifted to becoming goals in and of themselves. This is a bad place to be: when output is prioritized over foundational work and becomes the focus of our efforts, measuring output ceases to be effective.
This phenomenon—when a metric becomes the goal—is called Goodhart’s law. Oftentimes, human-resources professionals and those studying performance and productivity dynamics will mention it in conjunction with Campbell’s law, which states that the more a metric is used, the more likely it is to corrupt the process it is designed to monitor.
I’d say that both laws describe what we see and experience every day in science. Finkel is on point, and I think the vast majority of researchers would agree with deprioritizing metrics in favor of reporting impact. But as always, the scientific ecosystem is a complex one, and there are significant hurdles to making such a shift. Interestingly, Finkel suggests that funders should be the ones to push for these changes. That makes sense, but it is vital that consultation with the community occurs at every stage. When funders lead proposals for change, they should consult researchers to ensure buy-in and compliance. Not doing so puts broad acceptance at risk.

A recent example of change led by funders is Plan S, an initiative that would require that results from research funded by public grants be published in open-access journals or on open-access platforms. What the plan will ultimately look like has yet to be determined, but one of the main criticisms is that there hasn’t been enough consultation before policies have been announced.
Finkel’s piece contrasts with another that reports on a proposal by an Indian government committee under which PhD students would receive cash bonuses for publishing their work. It is yet another example of Goodhart’s law at work: the committee is proposing that the government incentivize quantity over quality. Critics of the proposal suggest that such a measure would result in more cases of misconduct, unethical behavior, hyping of results, and the like. What happened to rigor, best practice, and good science?