Scientists are already exploring using ChatGPT and similar artificial intelligence models in writing research papers, reviews, and grant applications. But how these bots will affect scientific publishing is unknown, says Heather Desaire, a chemist at the University of Kansas. “Is this just augmented spell-check that’s going to make people’s life better, or is this going to destroy scientific publication by adding in a bunch of misinformation?” she asks.
That’s why Desaire and her colleagues have developed a simple tool that can help explore the technology’s impact on academic publishing (Cell Rep. Phys. Sci. 2023, DOI: 10.1016/j.xcrp.2023.101426). To make it, they compared review articles written by scientists with scientific writing produced by ChatGPT. They found 20 features that hint at human-drafted text, including the presence of more complex ideas and varying punctuation marks. They also found longer paragraphs and more equivocal phrasing, marked by words like “but,” “however,” and “although.” The researchers used these data to train a machine-learning classifier to distinguish between person and machine with over 99% accuracy. The tool is easy to build, according to Desaire, so versions could be tailored to specific fields of research.
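As a rough illustration of this feature-based approach, the sketch below extracts a few stylistic signals of the kind the study describes. The specific feature names and the word list are invented for illustration; they are not the paper's actual 20 features, and a real detector would feed such features into a trained classifier.

```python
import re
import string

# Illustrative only: a few equivocal connectives the article mentions,
# not the study's full feature set.
EQUIVOCAL = {"but", "however", "although"}

def stylistic_features(text: str) -> dict:
    """Compute simple stylistic features of the kind used to
    separate human-written from AI-generated scientific prose."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    punct = {ch for ch in text if ch in string.punctuation}
    return {
        # Human-written samples tended to have longer paragraphs
        "mean_paragraph_words": sum(len(p.split()) for p in paragraphs)
        / max(len(paragraphs), 1),
        # Rate of equivocal words ("but," "however," "although")
        "equivocal_rate": sum(w in EQUIVOCAL for w in words)
        / max(len(words), 1),
        # Variety of punctuation marks, another human-leaning signal
        "punctuation_variety": len(punct),
    }
```

Feature vectors like these, computed over labeled human and ChatGPT samples, would then train an off-the-shelf classifier; the simplicity of the pipeline is what makes field-specific versions easy to build.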
Not everyone agrees that this effort is necessary. “I think the threat of authors submitting AI-generated text and secretly trying to pass it off as their own work has been greatly overblown,” says Michael King, a bioengineer at Vanderbilt University and editor in chief of Cellular and Molecular Bioengineering. He also questions whether detection tools will be able to keep up as AI models improve—though Desaire says the tool worked equally well on GPT-4, ChatGPT’s next iteration.