
Publishing

Researchers plan to release guidelines for the use of AI in publishing

A standardized set of dos and don’ts could replace the hodgepodge of rules released by journals and publishers last year

by Krystal Vasquez
January 19, 2024 | A version of this story appeared in Volume 102, Issue 2

 

[Image: a robot hand holding a pencil, with books in the background]
Credit: Madeline Monroe/C&EN/Shutterstock
New standardized guidelines for AI use in scholarly publishing are set to come out this year.

When ChatGPT’s popularity among researchers skyrocketed last year, a number of academic publishers and scientific journals quickly updated their policies to specify how study authors could use artificial intelligence tools in their research and publications.

Takeaways

Academic publishers and scientific journals issued rules to govern the use of ChatGPT and other AI tools in scientific publications last year.

The lack of consistency between existing AI-related guidance could create confusion in the scientific community.

A standardized set of guidelines for researchers on how to use and disclose AI in their work is expected to be published this year.

But “different publishers took different stances,” says Marie Soulière, an elected council member of the Committee on Publication Ethics (COPE). For example, some journals simply limited how AI could be used during the manuscript preparation process. Others, like Science, completely banned the use of generative AI tools, though the Science family of journals has since eased that restriction.

Giovanni Cacciamani, a professor of urology research at the University of Southern California’s Keck School of Medicine, worries that the lack of consistency among journals and publishers will create confusion in the scientific community and make it harder to define the dos and don’ts of AI use in research and scholarly publishing. So Cacciamani and colleagues plan to publish a standardized set of guidelines in early 2024, known as the ChatGPT, Generative Artificial Intelligence, and Natural Large Language Models for Accountable Reporting and Use Guidelines, or, amusingly, the CANGARU Guidelines.

The aim of the CANGARU Guidelines is to tell scientists “what to avoid to ensure the ethical, proper use of GPT [generative pretrained transformer] technologies in academics,” Cacciamani says. The guidelines will also provide a template for authors to follow for disclosing the use of AI in their manuscripts.

“This consensus will be really helpful for everybody,” Soulière says.


The group drafted the guidelines based on existing AI-related policies from publishers and on a survey that Cacciamani and his team conducted of publishers, journal editors, and publishing regulators, including COPE. They have now invited over 100,000 published scientists to provide additional feedback, though Cacciamani expects only a small percentage to respond.

He says he’s confident that, once the guidelines are published, the publishers and regulatory agencies he’s worked with will endorse them and that others in the publishing industry will eventually follow suit.

But CANGARU is unlikely to be the only project attempting to standardize AI-related guidance this year. There are “many initiatives going on, and they will all sort of probably say slightly different things,” says Bruno Ehrler, a researcher at AMOLF who has written about how AI could affect scientific publishing (ACS Energy Lett. 2023, DOI: 10.1021/acsenergylett.2c02828).
