Science Communication

Simple computer tool detects AI-generated science writing

Being able to differentiate human-written from AI-generated text will allow scientists to study how AI tools might affect scholarly publishing

by Alla Katsnelson, special to C&EN
June 8, 2023 | A version of this story appeared in Volume 101, Issue 19


Scientists are already exploring using ChatGPT and similar artificial intelligence models in writing research papers, reviews, and grant applications. But how these bots will affect scientific publishing is unknown, says Heather Desaire, a chemist at the University of Kansas. “Is this just augmented spell-check that’s going to make people’s life better, or is this going to destroy scientific publication by adding in a bunch of misinformation?” she asks.

That’s why Desaire and her colleagues have developed a simple tool that can help explore the technology’s impact on academic publishing (Cell Rep. Phys. Sci. 2023, DOI: 10.1016/j.xcrp.2023.101426). To make it, they compared review articles written by scientists with scientific writing produced by ChatGPT. They found 20 features that hint at human-drafted text, including the presence of more complex ideas and varied punctuation marks. Human writing also tended to have longer paragraphs and more equivocal phrasing, marked by words like “but,” “however,” and “although.” The researchers used these data to train a machine learning package to distinguish between person and machine with over 99% accuracy. The tool is easy to build, according to Desaire, so versions could be developed for specific fields of research.
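To give a flavor of the approach, here is a minimal sketch, not the authors' actual tool: it computes a few stylistic features of the kind the article describes (sentence length, punctuation variety, and counts of equivocal words like "but," "however," and "although") and applies a hand-set decision rule in place of the trained machine learning classifier. The thresholds and word list are illustrative assumptions.

```python
# Illustrative sketch (NOT the published tool): extract a few stylistic
# features like those the article describes, then score text with a
# simple hand-set rule standing in for the trained classifier.
import re

# Equivocal markers cited in the article; a real feature set has 20 features.
HEDGE_WORDS = {"but", "however", "although"}

def extract_features(text):
    """Return (mean sentence length, punctuation variety, hedge-word count)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    punct = set(re.findall(r"[;:,()\"'-]", text))
    mean_len = len(words) / max(len(sentences), 1)
    hedges = sum(w in HEDGE_WORDS for w in words)
    return mean_len, len(punct), hedges

def looks_human(text, hedge_threshold=1, punct_threshold=2):
    # Hypothetical decision rule: the real system learns weights from
    # labeled human vs. ChatGPT examples rather than hard-coding them.
    _, punct_variety, hedges = extract_features(text)
    return hedges >= hedge_threshold and punct_variety >= punct_threshold

human_like = ("The result was clear; however, caveats remained. "
              "Although small, the effect held, but replication is needed.")
ai_like = "The result was significant. The effect was observed. The method was applied."
```

In practice, as the article notes, such features feed a trained machine learning model rather than fixed thresholds, which is what lets the classifier reach high accuracy across many documents.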

Not everyone agrees that this effort is necessary. “I think the threat of authors submitting AI-generated text and secretly trying to pass it off as their own work has been greatly overblown,” says Michael King, a bioengineer at Vanderbilt University and editor in chief of Cellular and Molecular Bioengineering. He also questions whether detection tools will be able to keep up as AI models improve—though Desaire says the tool worked equally well on GPT-4, ChatGPT’s next iteration.
