World Chemical Outlook 2024
➡ Academic publishers and scientific journals issued rules last year to govern the use of ChatGPT and other AI tools in scientific publications.
➡ The lack of consistency among existing AI-related guidelines could create confusion in the scientific community.
➡ A standardized set of guidelines for researchers on how to use and disclose AI in their work is expected to be published this year.

When ChatGPT’s popularity among researchers skyrocketed last year, a number of academic publishers and scientific journals quickly updated their policies to specify how study authors could use artificial intelligence tools in their research and publications.
But “different publishers took different stances,” says Marie Soulière, an elected council member of the Committee on Publication Ethics (COPE). For example, some journals simply limited how AI could be used during manuscript preparation. Others, like Science, banned the use of generative AI tools outright, though the Science family of journals has since eased that restriction.
Giovanni Cacciamani, a professor of urology research at the University of Southern California’s Keck School of Medicine, worries that the lack of consistency among journals and publishers will create confusion in the scientific community and make it harder to define the dos and don’ts of AI use in research and scholarly publishing. So Cacciamani and colleagues plan to publish a standardized set of guidelines in early 2024: the ChatGPT, Generative Artificial Intelligence, and Natural Large Language Models for Accountable Reporting and Use Guidelines, or, amusingly, the CANGARU Guidelines.
The aim of the CANGARU Guidelines is to tell scientists “what to avoid to ensure the ethical, proper use of GPT [generative pretrained transformer] technologies in academics,” Cacciamani says. The guidelines will also provide a template for authors to follow for disclosing the use of AI in their manuscripts.
“This consensus will be really helpful for everybody,” Soulière says.
The group drafted the guidelines on the basis of publishers’ existing AI-related policies and a survey of publishers, journal editors, and publishing regulators, including COPE, that Cacciamani and his team conducted. They have now invited more than 100,000 published scientists to provide additional feedback, though Cacciamani expects only a small percentage to respond.
He says he’s confident that, once the guidelines are published, the publishers and regulatory agencies he’s worked with will endorse them and that others in the publishing industry will eventually follow suit.
But CANGARU is unlikely to be the only project attempting to standardize AI-related guidance this year. There are “many initiatives going on, and they will all sort of probably say slightly different things,” says Bruno Ehrler, a researcher at AMOLF who has written about how AI could affect scientific publishing (ACS Energy Lett. 2023, DOI: 10.1021/acsenergylett.2c02828).