Ethics

Editorial: Tedium and creativity in AI ethics

by C&EN editorial staff
March 29, 2024 | A version of this story appeared in Volume 102, Issue 10

 

Artificial intelligence–generated figure of a rat with a huge growth coming from its belly.
Credit: Frontiers
Figure of a rat that appeared in a now-retracted article published in the journal Frontiers in Cell and Developmental Biology

In the past 2 months, two academic articles took the internet by storm. They weren’t shockingly well written, nor did they highlight paradigm-shifting science. Instead, scientists and the public alike were captivated by the authors’ blatant use of artificial intelligence. So blatant, in fact, that if the articles had been published this week, we’d have thought they were April Fools’ jokes. One showcased a ridiculously well-endowed rat drawn by Midjourney, and the other used the words of ChatGPT to craft the introduction.

Unfortunately, it’s no joke that these papers, and others like them, make it through the peer review process. These kinds of mistakes make us at C&EN wonder how AI will fit into the future of chemistry.

AI is a powerful tool, one that chemists have been using for decades. Researchers used simple machine learning programs 50 years ago for drug discovery. These days, more sophisticated versions can predict the 3D structures of proteins. One day, AI may be the foundation of entire labs. But chemists could be in trouble if we don’t establish some ground rules first.

There are, broadly, three classes of AI models: discriminative, predictive, and generative. The first is straightforward and simply learns to discriminate between different classes of data—think learning to tell the difference between benzene and pyridine rings. After learning the rules, discriminative models can appropriately sort new data into their respective categories. The outcomes depend on the quality of the training data, so models are susceptible to bias.
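
To make the idea concrete, here is a minimal sketch of a discriminative model (ours, not the editorial's), assuming scikit-learn is available. The two-number "descriptors" and the labels are hypothetical stand-ins for real molecular features.

```python
# A toy discriminative model: learn to sort six-membered aromatic rings
# into benzene-like vs. pyridine-like classes from labeled examples.
# The descriptors [ring nitrogen count, ring carbon count] are
# hypothetical stand-ins for real molecular features.
from sklearn.linear_model import LogisticRegression

X_train = [[0, 6], [0, 6], [1, 5], [1, 5]]  # labeled training data
y_train = [0, 0, 1, 1]  # 0 = benzene-like, 1 = pyridine-like

model = LogisticRegression()
model.fit(X_train, y_train)

# Sort a new, unseen ring into one of the learned categories.
print(model.predict([[1, 5]]))  # -> [1], i.e., pyridine-like
```

If the labeled examples are skewed or mislabeled, the boundaries the model learns will be skewed too, which is the bias problem described above.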

Both predictive and generative AI models take the discriminative model a step further, first sorting data and then identifying and learning from trends within the data. Predictive programs use those learned trends to, unsurprisingly, predict future outcomes. This is useful in designing drugs or predicting the products of a reaction. Generative models take what they’ve learned and create something new. ChatGPT and Midjourney are both generative: their primary function is to be creative for you.
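
For illustration only, here is a minimal sketch of what "generative" means at its simplest, using nothing beyond the Python standard library: a character-level Markov chain that learns which character tends to follow which in a few hypothetical SMILES strings, then samples those trends to produce a new string.

```python
# A toy generative model: a character-level Markov chain "trained" on a
# few hypothetical SMILES strings, then sampled to create something new.
import random
from collections import defaultdict

corpus = ["c1ccccc1", "c1ccncc1", "C1CCCCC1"]  # toy training strings

transitions = defaultdict(list)  # which character follows which
for s in corpus:
    for a, b in zip(s, s[1:]):
        transitions[a].append(b)

random.seed(0)
out = ["c"]  # start from a character seen in training
for _ in range(7):
    followers = transitions.get(out[-1])
    if not followers:
        break
    out.append(random.choice(followers))
print("".join(out))  # a new string that mimics the training data
```

Note that nothing in this sampler checks whether its output is chemically valid; it will happily produce plausible-looking nonsense, which is hallucination in miniature.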

A striking problem is that both of these algorithm types suffer from hallucinations: they make stuff up on the basis of trends. Hallucination is not only worrisome for scientists looking for new drug candidates; because programs are not necessarily trained on chemistry curricula, it can also be troublesome for overburdened educators. If students go to ChatGPT for answers instead of asking a teacher, they may find themselves misled. And if overloaded faculty turn to one of these programs to help write letters of recommendation, they might inadvertently say something nonsensical.

This is not to say that AI has no place in chemistry. We can look to calls in the arts for AI to enable productivity among creatives, not mimic creativity for tedious people. In chemistry, AI can help relieve the tedium of iterative calculations or parse complex datasets.

As AI algorithms become more sophisticated and more central to our lives both in and out of the lab, however, the chemistry community would be well served by preemptively establishing ethical and rational guidelines for their use.

Misrepresentation and fabrication are easy to identify as wrong. The harder challenge is to separate reducing the tedium of a process from reducing individual creative deduction. And even in drawing that line, questions arise: When generative AI is prompted to write an introduction or make a figure, who is the original author? If a predictive model finds a breakthrough drug, who owns that intellectual property?

Such questions don’t have simple answers. But if we are to embrace the promise of AI to ease our laboratory burdens and expedite discoveries, we must also embrace the headache of establishing new rules for a new technology. Then we must train the next generation of chemists early to use AI ethically and rationally so they can deploy it to its full potential as a tool to ease the tedious parts of creativity. And maybe to make a joke every now and again.

This editorial is the result of collective deliberation at C&EN. For this week’s editorial, the lead contributors are Fionna Samuels and Nick Ishmael-Perkins.

The views expressed are not necessarily those of ACS.
