
Publishing

Many publishers lack policies on using AI chatbots to write papers

Study highlights need for industry standards in evolving space

by Dalmeet Singh Chawla, special to C&EN
August 14, 2024 | A version of this story appeared in Volume 102, Issue 25

 

An illustration of a smiling robot face overlaid on hand-drawn chemical structures and a pen lying on an article printout; inside the robot face, a gloved hand draws more chemical structures on a whiteboard.
Credit: Madeline Monroe/C&EN/Shutterstock

Many scholarly publishers still don’t have policies on whether researchers can submit papers written by artificial intelligence chatbots like ChatGPT.

Jeremy Y. Ng, a metascientist who works at the Centre for Journalology at the Ottawa Hospital Research Institute, and colleagues audited the publicly available policies of 163 members of the International Association of Scientific, Technical, and Medical Publishers (STM). The researchers report in a preprint article—published before peer review—that of those 163 members, only 56 had a policy on whether authors can submit papers written by AI chatbots (medRxiv 2024, 10.1101/2024.06.19.24309148). Forty-nine of those 56 publishers required authors to declare when they used chatbots, and none allowed researchers to list AI tools like ChatGPT as an author—which has happened in some cases.

“The use of AI chatbots in academic publishing is a new and rapidly evolving space,” Ng says. “The absence of industry-wide standards or guidelines also contributes to the slow adoption, as publishers may be hesitant to implement policies without a clear framework to follow.”

Four of the publishers surveyed had an outright ban on using chatbots. But one of those, the American Association for the Advancement of Science, which publishes Science, has since reversed its ban.

The analysis revealed that 19 surveyed publishers said researchers should not cite chatbots as primary sources. Eighteen allowed the use of chatbots in research methods, such as for organizing data, and 33 permitted researchers to use chatbots to help write non-methods sections, including the background and introduction of manuscripts. Fourteen publishers said authors could use AI to generate images, while 15 allowed authors to use chatbots to proofread manuscripts.

The American Chemical Society and the Royal Society of Chemistry, both of which are STM members, don’t permit chatbots to be listed as authors, but both require researchers to acknowledge when they use chatbots in preparing manuscripts. (ACS also publishes C&EN but is not involved in editorial decisions.)

“Every publisher should have a policy on the attribution and use of these automated tools because they are attractive to researchers, but their use has so many caveats,” says Matt Hodgkinson, ethics adviser at MyRA, a generative AI–powered app that suggests key themes from uploaded texts such as interview transcripts.

But Hodgkinson questions whether the numbers in the study are accurate. Some academic publishers aren’t STM members, he says, and many STM members are trade bodies and vendors rather than publishers and so won’t have policies on AI chatbots. “The authors [of the study] need to check every included ‘journal publisher,’ and accounting for these issues will seriously shift the percentages,” he says.

Hodgkinson says a key missing piece of the study is the use of generative AI by peer reviewers and editors. Some studies have already indicated that peer reviewers may be using ChatGPT. “This is a major worry for the validity and rigor of peer review,” he says.

In previous work, Ng and his team found that many authors may not be familiar with AI chatbots, perhaps because of a lack of training. Ng suspects that as a result, researchers may also be unfamiliar with publishers’ policies on generative AI. “Continuous work in this field is crucial to develop evidence-based policies that ensure ethical standards, transparency, and quality are maintained,” Ng says.

CORRECTION:

This article was updated on Aug. 14, 2024, to correct the description of the MyRA app. It was not built by the firm Clarivate.
