In 2005, John Ioannidis, a professor of medicine at Stanford University, opened a can of worms. In a paper published in PLOS Medicine, he argued that most published research findings are false (DOI: 10.1371/journal.pmed.0020124).
To date, Ioannidis’s “landmark study” has attracted thousands of citations and helped establish a field in its own right, says Jelte Wicherts, who studies research methodology at Tilburg University.
The use of scientific methodology to study science itself is called metascience. The discipline has become mainstream in recent years, tackling some of the thorniest problems science faces, including a lack of reproducibility of academic literature, biases in peer review, and the fair allocation of research funding. “Metascience is now a distinct species,” although it has ancestors in medical science, psychology, and other disciplines, Wicherts says.
However, Ioannidis, who launched the Meta-Research Innovation Center at Stanford (METRICS) in 2014, is hesitant to frame metaresearch as a separate field. “In a way, every researcher is a metaresearcher, since the issues involved are at the core of how to do science and apply the scientific method and maximize the yield of reproducible and useful information,” he says.
The new Research on Research Institute plans to focus on five areas:
Research: Support, expand, and build capacity for interdisciplinary, mixed-method research on research in and across research systems worldwide.
Translate: Connect academic capabilities to the data and analytical resources of the institute’s partners.
Innovate: Experiment, coproduce, and test new tools, indicators, funding modes, and evaluation frameworks.
Broker: Critically evaluate methods and support engagement with data and evidence by decision makers and wider society.
Facilitate: Create an independent space for learning, networking, and collaboration between researchers, policymakers, funders, and technologists.
Source: Research on Research Institute
On Sept. 30, metaresearch got another boost when an international coalition of policymakers, funders, universities, publishers, and researchers launched the Research on Research Institute (RoRI), which will be dedicated to tackling metascience questions on a mass scale. James Wilsdon, a research policy scholar at the University of Sheffield and the institute’s founding director, says that the scientific community is “woefully underinvesting” in metascience given the benefits it could have for the “efficiency, dynamism and sustainability” of the research enterprise.
The four founding partners of the institute are the biomedical research charity Wellcome Trust, Leiden University’s Centre for Science and Technology Studies, research technology firm Digital Science, and the University of Sheffield. Wilsdon says a consortium of additional strategic partners, including public and private organizations from 11 nations, is already on board, and conversations are underway with more potential partners.
Marcus Munafò, a biological psychologist at the University of Bristol who launched the UK Reproducibility Network in 2018 and coauthored a manifesto for reproducible science in 2017, says this is an exciting time for metascience. The most positive change, he says, is that researchers are moving beyond investigating the presence and absence of reproducibility and replicability and thinking about potential interventions to improve the status quo.
RoRI plans to liaise with different stakeholders, broaden the metascience topics being investigated, and scale up initiatives to see if study results hold up in larger systems, Wilsdon says. He makes the case for large-scale collaborations by noting that cultural, institutional, and disciplinary differences may skew results one way or another.
RoRI is organizing its initial projects under three broad headings: decision-making, careers, and culture. Wilsdon expects the first few initiatives to focus on how funding applications pass peer review and how research money is allocated, including testing different funding models.
RoRI also plans to set up a funders lab where agencies would be free to share data, policies, and practices. Brian Nosek, founder and director of the Center for Open Science, likes the idea. “This is vitally important because no single funder can easily evaluate the impact of their policies,” he says. “But with an ethos of open sharing, we can learn a lot with comparative investigations of existing practices.”
Nosek, who is also a psychologist at the University of Virginia, notes that assessing funder policies could change how institutions reward and evaluate researchers and their work. Tilburg University’s Wicherts notes that the current reward system encourages researchers to cut corners to churn out several subpar papers instead of fewer studies of high quality. “Change is coming, but there is still a lot of work to be done,” he says.
Another problem worth tackling is the amount of time academics waste writing grant proposals, RoRI director Wilsdon says. By some estimates, researchers spend around half their time writing grant proposals and often end up overpromising what they can deliver. Although some competition can be healthy, too much of it has side effects; in scientific funding, those side effects take the form of too much focus on publishing in flashy journals and too little emphasis on crucial replication studies. “Unfortunately, what’s best for scientists and what’s best for science are two different things,” Wicherts says. Some scientists have called for revamping the entire process and distributing money by lottery, an idea that some funders are now testing.
For Ioannidis, RoRI is a much-needed initiative. “There is a rapidly growing community of metaresearchers and a very wide metaresearch agenda,” Ioannidis says. “It is important to see new efforts like RoRI that will help house and coordinate some of this ongoing metaresearch and make sure it is done in a rigorous way.”
Nosek agrees. He thinks there is growing interest in metascience for two reasons: it’s getting easier to acquire relevant data, and people are realizing that science is operating much less efficiently than it could be. “As a community, science has not done a good job of creating a feedback loop between critiques and actual policies and practices to shape how science gets done,” he says. “Luckily, the scientific community has enormous capacity to investigate the causes and potential solutions for that inefficiency.”