Informatics

New AI systems will emerge as labs tighten the connection between computer and robot

And biotech firms will start to learn the fate of drug candidates discovered with AI help

by Rick Mullin, special to C&EN
January 19, 2024 | A version of this story appeared in Volume 102, Issue 2

 

Credit: Exscientia
Exscientia recently opened a laboratory in Oxfordshire, England, dedicated to integrating artificial intelligence and robotics.

After a year that began with a flourish of generative artificial intelligence and ended with the first government regulations on developing such systems, 2024 will see the technology advance unimpeded in drug and materials research.

Takeaways

Artificial intelligence systems combining multiple large language models will advance in drug discovery.

The connection between AI and robotics will tighten in discovery laboratories.

An increased emphasis on the ethical use of AI will follow a European Union directive.

Experts agree that the year ahead will feature the rapid deployment of advanced generative AI architectures called agent systems. They combine multiple large language models in platforms capable of determining, on their own, which databases or tools to query in solving complex problems. They will move forward most rapidly in drug research, according to Marinka Zitnik, a professor of biomedical informatics at Harvard Medical School.
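As a rough illustration of that idea only, the short Python sketch below shows the skeleton of an agent loop that routes each research sub-task to one of several tools and collects the results. Every name, tool, and hard-coded routing rule in it is a hypothetical placeholder invented for this sketch, not the architecture of any platform mentioned in this story; in a real agent system, a large language model, not an if-statement, would choose the tool.

# Minimal sketch of an "agent system" routing loop.
# All tool names and outputs are illustrative placeholders.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]

def search_bioactivity(query: str) -> str:
    # Placeholder for a database lookup (e.g., known bioactivity records)
    return f"[bioactivity records matching '{query}']"

def predict_properties(query: str) -> str:
    # Placeholder for a property-prediction model
    return f"[predicted ADMET profile for '{query}']"

TOOLS: Dict[str, Tool] = {
    "bioactivity_search": Tool("bioactivity_search",
                               "look up known bioactivity data",
                               search_bioactivity),
    "property_prediction": Tool("property_prediction",
                                "predict absorption and toxicity properties",
                                predict_properties),
}

def choose_tool(task: str) -> Tool:
    # Stand-in for the language model that decides which tool to query.
    if "toxicity" in task.lower() or "admet" in task.lower():
        return TOOLS["property_prediction"]
    return TOOLS["bioactivity_search"]

def run_agent(tasks: List[str]) -> None:
    # For each sub-task, pick a tool and gather its output.
    for task in tasks:
        tool = choose_tool(task)
        print(f"{task!r} -> {tool.name}: {tool.run(task)}")

if __name__ == "__main__":
    run_agent(["find inhibitors of kinase X",
               "estimate ADMET toxicity of compound Y"])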

The interaction of AI and robotics in fully automated laboratories will advance as well, as installations start closing the execution gap—the aspects of lab operation that still require human intervention between computer and robot.

Exscientia CEO Andrew Hopkins says deploying fully automated labs tops the agenda at his firm, a leader in AI-based drug discovery. The objective is to cut the time needed to complete experiments, Hopkins says. Exscientia recently opened a 3,250 m² building in Oxfordshire, England, dedicated to integrating AI and robotics for chemical synthesis and closed-loop drug design and discovery.

“Next year will be the year of clinical readouts. The year of ultimate truth.”
Alex Zhavoronkov, CEO, Insilico Medicine

AI-enabled biotechnology companies with their own drug pipelines have begun to shelve some development programs and shift resources to lead candidates. It “will be the year of clinical readouts,” says Alex Zhavoronkov, CEO of the AI-based drug discovery firm Insilico Medicine. “The year of ultimate truth.”

Zhavoronkov notes that the leaders in AI-based drug discovery are positioned to score in the year ahead. He says investors have become more sophisticated in their assessments of AI, focusing on companies that deliver high-quality clinical drug candidates at top speed.

AI and lab automation are also at the forefront of digitalization at bigger life sciences and chemical companies.

“In the past year, we’ve been putting in a more deliberate innovation strategy,” says Karen Madden, chief technology officer for the life sciences business of Merck KGaA. Pursuit of the “lab of the future” is one of the top priorities for her group, she says.

“We’re starting to deploy technology in a more deliberate and strategic way,” Madden says. This includes the use of a generative AI system the company calls myGPT. “Our organization started embracing and experimenting with it, but we needed to put in guidelines and direction before it got too Wild West or out of hand,” she says.

International efforts to establish ethical guidelines for developing and deploying AI systems will advance in 2024. They follow a European Union directive last month and the Joe Biden administration's executive order in October, which requires developers of AI systems that may pose security risks to share safety test results with the US government.

Risto Uuk, EU research lead at the Future of Life Institute, which seeks to guide technology toward human benefit, says he was encouraged by a November ethics summit in England that notably included the US and China. Follow-up meetings are scheduled for South Korea and France this year.

And the focus on ethics in AI labs will grow this year. While some sources suggest that closing the gap between AI and robots poses a security threat, Harvard’s Zitnik says rogue labs are a long way off. “I’m not worried so much in 2024 that models will reach a level of maturity to work autonomously,” she says. “I would not expect there would be labs working in an unmanned, uncontrolled manner to make dangerous molecules.”

Zitnik sees open access to generative AI models as a more immediate concern. “The barrier to using those models and coupling them to experimentation is getting lower and lower,” she says. “It is getting easier and easier, without requiring someone necessarily to have a PhD in chemistry or biochemical engineering, to start using those models, coupling them with experimental workflows. And that’s where a potential short-term risk might arise.”
