

Informatics

Perspectives: Augmented intelligence

Smart machines may run the lab of the future, but not without human intervention

by Nick Lynch
April 2, 2018 | A version of this story appeared in Volume 96, Issue 14

 

Credit: Shutterstock
Statue of Alan Turing at the University of Surrey.

The swift rise of artificial intelligence (AI) stokes a primal anxiety: the fear that a machine, in this case a computer system able to “learn” by adapting its algorithms and functionality according to data input, will replace humans at work. Arising repeatedly with the advent of significant technologies for as long as we’ve been able to conceive of machines, human obsolescence anxiety even fostered a great American folktale. We all remember John Henry, the “steel-driving man” who challenged a machine, won the contest, but died of exhaustion from his effort.

Earlier this year, world leaders who gathered at the World Economic Forum in Davos, Switzerland, spoke to a global audience about the importance of “ethical” adoption of AI, referring to concerns about job loss due to increased automation. Recent news coverage of social media marketing engines and malicious email hacking has positioned AI as a growing threat. One could be forgiven for thinking that the smart machine emerged as a force in just the past couple of years, given the attention it has received from government, business, and the media.

An algorithm capable of replacing a human scientist is not a realistic near-term proposition.

In fact, AI has a much longer heritage. Machine learning (ML), the technique by which computers adapt their operations through exposure to their environments via data processing, has existed since 1952, when Arthur Samuel’s computers at IBM used the technique to improve at playing checkers. In 1947, Alan Turing, recognized as one of the founders of artificial intelligence, gave a speech in London beginning, “What we want is a machine that can learn from experience.” Turing had by that time made a name for himself by employing an electromechanical machine to crack Germany’s Enigma encryption, helping the Allies overcome the Nazis in crucial military engagements.

 

Field intelligence

From a survey of 374 biotechnology and life sciences research and IT managers conducted by the Pistoia Alliance in 2017

44% of respondents are using or experimenting with AI.

46% of current projects take place in early discovery or preclinical research.

15% of projects take place in development, including clinical research.

23% of projects involve target prediction and positioning.

13% of projects involve biomarker discovery.

8% of projects involve imaging analysis.

5% of projects involve patient stratification.

30% of those not using AI cite lack of technical expertise as the most critical barrier.

26% of nonusers cite poor data quality as the most critical barrier.

24% of nonusers cite access to data as the greatest barrier.

AI’s advance in drug discovery research and development has mirrored its broader trajectory, cruising quietly beneath the radar for years before suddenly flying up onto the screen. A 2017 survey by the Pistoia Alliance found that 44% of life sciences research and IT managers are now using, or experimenting with, AI. The sector is aware AI is here to stay, with digital technology positioned to transform science going forward.

Some hand-wringing is to be expected. But what is typically ignored in dystopian portrayals of AI is that an algorithm capable of replacing a human scientist is not a realistic near-term proposition, particularly when we look at the complexity of the problems addressed in the field of science and discovery. Curing cancer, finding effective treatments for Alzheimer’s disease, seeking new methods of carbon capture, and pioneering alternatives to lithium-ion batteries involve research efforts that differ fundamentally from the kinds of tasks with discrete sets of outcomes or solutions that are ripe for automation. They require interdisciplinary expertise and judgment that a machine cannot provide.

In his London speech, Turing noted that human expectations of machines must be realistic and that machines will need to work alongside humans. “No man adds very much to the body of knowledge; why should we expect more of a machine?” he asked. “The machine must be allowed to have contact with human beings in order that it may adapt itself to their standards.” The reality is that as AI continues to develop and improve, the way humans interact with computers will become far more nuanced. This will lead us to an era less of artificial intelligence, and more of augmented intelligence.

Augmented intelligence is an application of AI that guides a human to reach a decision rather than making the final decision for the human. In addition to ML, it includes deep learning, whereby computers operate according to evolving data representations as opposed to preprogrammed task functionality, and natural-language processing (NLP) systems, which introduce speech recognition and other forms of natural linguistic “data” to AI.
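To make the distinction concrete, the sketch below shows one way an augmented-intelligence workflow might look in code: a simple machine-learning model ranks hypothetical compounds by predicted activity and flags which ones deserve attention first, while the decision about what to actually test stays with the scientist. The data, descriptors, and the 0.7 review threshold are invented for illustration, and scikit-learn is assumed as the modeling library.

```python
# A minimal sketch of "augmented intelligence": the model ranks candidate
# compounds by predicted activity, but a human scientist makes the call.
# The data, features, and threshold here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training set: 200 compounds, 5 numeric descriptors each,
# labeled 1 (active) or 0 (inactive) from past assays.
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# New, untested compounds: the model only suggests a review order.
X_new = rng.normal(size=(10, 5))
scores = model.predict_proba(X_new)[:, 1]  # predicted probability of activity

for idx in np.argsort(scores)[::-1]:
    flag = "review first" if scores[idx] > 0.7 else "lower priority"
    print(f"compound {idx}: predicted activity {scores[idx]:.2f} -> {flag}")
# The final decision on which compounds to synthesize stays with the chemist.
```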

These terms may sound alien even to scientists, but the techniques are already being used to help researchers make decisions on broad questions. ML and NLP, for example, have been used for some 20 years in computers that analyze patent literature and make recommendations to their human “masters.” Similarly, AI techniques adopted from tech giants such as Google, Microsoft, and Amazon have led to increased use of AI for image recognition. Google’s recent news that its DeepMind unit is assisting with eye disease diagnoses after analyzing more than 1 million eye scans from Moorfields Eye Hospital illustrates in real terms how AI analysis can help in health care.

Functions such as recognizing images or comparing patent literature to known terms have a finite number of outcomes according to a set of rules. In such cases, AI works alongside humans, reducing much of the drudge work and improving efficiency while giving scientists more time to focus on innovation. However, when we look at drug discovery and development, there is a great deal of science that we don’t yet know, science that no AI system has the capability to take over from humans. What’s more, there are no right or wrong answers in experimentation; the input of human nuance, inference, and empirical evidence is needed to understand results and gain insights.

Drug discovery requires a fundamental knowledge of the derivation of an answer and the reasoning behind it. Such knowledge is important when it comes to seeking approval for a compound—regulatory review requires researchers to produce comprehensive documentation of experimental results. Such information is also crucial to scientists who need to trust the answers that a machine offers. A black box that simply spits out numbers won’t do. Laboratories quite often tone down the predictive strength of an AI algorithm so that the human user can better understand the outcome. This is the role augmented intelligence will play in the future of complex R&D. AI systems won’t represent the sole decision-making approach but will instead supplement and guide what human scientists want to explore.
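One way a laboratory might make that trade-off is to choose a deliberately simple model whose reasoning can be written down and audited, even if a deeper black-box model would score slightly better. The sketch below illustrates the idea with a depth-limited decision tree on synthetic assay data; the descriptor names and depth limit are hypothetical, and scikit-learn is again assumed.

```python
# A minimal sketch of trading raw predictive power for interpretability:
# fit a shallow decision tree (easy to read) rather than a deeper,
# more accurate black box, so the reasoning behind each prediction can
# be documented. Data, descriptor names, and the depth cap are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = ((X[:, 0] > 0) & (X[:, 2] < 0.5)).astype(int)  # synthetic assay outcome

# Depth is capped at 2: less predictive strength, but a rule set a
# scientist (or a regulator) can actually read and question.
interpretable = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Print the learned rules using hypothetical descriptor names.
print(export_text(interpretable, feature_names=["logP", "MW", "pKa", "PSA"]))
```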

Beyond assisting researchers at the bench, AI will have a transformative impact on lab safety and overall operations. Take, for instance, a pair of smart glasses—augmented-reality lenses that a chemist or scientist can wear in the lab. Perhaps the chemist is about to perform an experiment for which she or he is inappropriately trained or at risk of incorrectly handling a hazardous substance. Maybe the laboratory isn’t set up for the job—maybe it lacks a category 3 biosafety rating required for examining a biological sample. In these instances, the glasses can alert the scientist and prevent an accident.


Smart glasses might give a user approaching a laboratory instrument instructions on how to operate it or information about recent updates to the equipment. They could also greatly improve predictive maintenance, resulting in smart labs that know when machines need repair. AI that compares a machine’s outputs with benchmarks, or analyzes the background (meta) information the machine produces, can alert scientists when an instrument is not performing as efficiently or accurately as it should.
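In its simplest form, that kind of monitoring can be little more than a running statistical check against the instrument’s own history. The sketch below illustrates the idea with made-up quality-control readings: recent measurements of a reference standard are compared with a historical benchmark, and drift beyond a hypothetical three-sigma threshold triggers a maintenance alert.

```python
# A minimal sketch of the predictive-maintenance idea: compare an
# instrument's recent readings of a known reference standard against its
# historical benchmark and flag drift. All numbers here are made up.
import statistics

benchmark_runs = [10.02, 9.98, 10.01, 10.00, 9.99, 10.03, 9.97]  # historical QC values
recent_runs = [10.20, 10.25, 10.31]                              # latest QC values

mean = statistics.mean(benchmark_runs)
stdev = statistics.stdev(benchmark_runs)

for value in recent_runs:
    z = (value - mean) / stdev
    if abs(z) > 3:  # hypothetical alert threshold
        print(f"reading {value}: {z:+.1f} sigma from benchmark -> schedule maintenance")
    else:
        print(f"reading {value}: within normal range")
```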

There is huge potential for AI to enhance research. It is important for scientists to understand how the technology could change their jobs—not from a protectionist stance, but with a focus on how they can benefit from technological advances launched more than 70 years ago when Turing’s work with machines likely shortened the war in Europe by more than two years and saved over 14 million lives. Researchers may yet avoid John Henry’s Pyrrhic victory by embracing the potential of the machine, putting down the hammer, and going to work with a sleek, new tool.

Credit: Courtesy of Nick Lynch

Nick Lynch is cofounder of the Pistoia Alliance, an association of pharmaceutical research informatics professionals and system vendors, and currently heads AI strategy development for the organization. He has 20 years of experience in informatics, 13 at AstraZeneca, where he led research teams and managed global systems integration in clinical and preclinical research.
