Researchers in Indonesia are up in arms about a metric that the nation’s government introduced in 2017 to measure the productivity and performance of academic researchers and institutions, and that is now being used to decide which researchers should receive research funding and scholarships.
Critics say the methodology and reasoning behind the metric, known as the Science and Technology Index (SINTA), are unclear. SINTA takes into account the number of journal and non-journal articles indexed in the database Scopus, the number of citations these documents accumulate in Scopus and Google Scholar, and researchers’ h-index. The h-index is another controversial metric that is designed to measure researchers’ productivity and the impact of their publications.
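The h-index mentioned above has a simple definition: it is the largest number h such that a researcher has at least h papers that have each been cited at least h times. A minimal sketch of that calculation (the function name and example citation counts are illustrative, not part of SINTA’s actual implementation):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    # Sort citation counts in descending order, then find the last
    # position where the count is still >= its 1-based rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4,
# because four papers have at least four citations each.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

One criticism of the h-index, relevant to the debate here, is visible even in this toy example: a paper’s citations beyond the threshold (the 10-citation paper) add nothing, and fields with different citation rates are not comparable.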
Over 150,000 researchers in Indonesia are already registered on SINTA.
Since its introduction, the metric has been gamed by some academics, who realized that Scopus-indexed papers that don’t undergo peer review still count towards SINTA scores (S-scores), says Surya Dalimunthe, an international consultant at the Islamic University of North Sumatra in Indonesia who is critical of SINTA.
In July, Indonesia’s Ministry of Research, Technology and Higher Education gave awards to researchers with the highest SINTA scores. The move sparked outrage among many scholars, who say the prize recognized those who abused the system.
Lukman Kemenristekdikti, head of scientific journals at the ministry and co-author of a 2018 study about SINTA, admits that 15 researchers have been sanctioned for inflating their S-scores using unethical practices. The ministry stopped funding these researchers and alerted their employers, Kemenristekdikti says.
Kemenristekdikti says the ministry plans to refine SINTA to also consider the quality of manuscripts instead of just quantity. Tweaks to the formula underlying SINTA will also mean that self-citations and citation cartels—groups of researchers who deliberately cite each other excessively with the intent of boosting one another’s numbers—are weeded out from calculations, he adds.
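Excluding self-citations, the simpler of the two abuses, amounts to discarding any citation whose citing paper shares an author with the cited paper. A hedged sketch of that idea (the data layout and function are hypothetical; the ministry has not published SINTA’s revised formula, and detecting citation cartels would require more elaborate network analysis):

```python
def non_self_citations(cited_authors, citing_papers):
    """Count citations from papers sharing no authors with the cited paper.

    cited_authors: set of author IDs on the cited paper (hypothetical layout).
    citing_papers: list of author-ID sets, one per citing paper.
    """
    cited = set(cited_authors)
    # A citation counts only if the citing paper's author set is
    # disjoint from the cited paper's author set.
    return sum(1 for authors in citing_papers if not (set(authors) & cited))

# Example: of three citing papers, one shares author "A1" with the
# cited paper, so only two citations count.
print(non_self_citations({"A1", "A2"}, [{"A3"}, {"A1", "A4"}, {"A5"}]))  # → 2
```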
Detractors counter that individual researchers should also be evaluated using qualitative judgement, in line with the Leiden Manifesto for research metrics. Although qualitative assessment may incorporate quantitative statistics, “The S-score appears to be based on the idea that research evaluation of individual researchers can be done in a purely quantitative way,” says Ludo Waltman, deputy director of the Centre for Science and Technology Studies at Leiden University, who co-authored the manifesto. “This is a dangerous idea.”
Waltman’s concerns include the fact that SINTA does not account for varying publication and co-authorship practices in different fields. “This results in a strong incentive to add authors to publications even though they did not make a contribution,” he says. Also, other important research contributions, such as peer review and data collection, are not rewarded under the system, Waltman adds.
Basing SINTA on only one database is also a limitation, Dalimunthe says, since not all journals are indexed in Scopus. But SINTA will in future integrate with other sources, including Web of Science, the DOI registry Crossref and the researcher-identifier system ORCID, Kemenristekdikti says.
Tatas Brotosudarmo, president of the Indonesian Chemical Society, thinks SINTA is helpful overall, especially since Indonesia has more than 3,000 universities of varying quality. He notes that SINTA may be useful for identifying which assistant or associate professors are most worthy of promotion, and the metric may also help track researchers’ records after promotion.
Nevertheless, Dalimunthe and his colleagues are calling for the government to abandon SINTA and adopt a policy that would incentivize more openness and clarity around research practices. One example they cite is the US Center for Open Science’s Transparency and Openness Promotion guidelines.