Michelangelo’s depiction of the creation of Adam on the ceiling of the Sistine Chapel pivots on the iconic image of two fingers nearly touching—the finger of the creator and that of the creature he made in his image. That touch marks a crucial turn in the Genesis story: the beginning of a uniquely intelligent species on earth, though one that overreaches badly in the field of intelligence not much further on.
A seemingly inexhaustible inspiration for art and literature, that imminent touch, with its implication of trouble ahead, has become a metaphor for artificial intelligence (AI).
A machine endowed with the cognitive capabilities of its creator has been a goal in computer development since World War II. Rudimentary versions of mechanized human intelligence are now able to process data and analyze images that would overwhelm whole laboratories of human brains, and there is a general perception that the intelligent machine has arrived. With it comes enthusiasm as well as foreboding.
Both feelings are present in laboratories dedicated to drug discovery and development, where the jobs include compound screening, experiment design, image processing, and patient data analysis.
As in other industries that have practical experience with intelligent machines, researchers in the drug industry have taken off the table the prospect of robots making humans obsolete. Many drug researchers consider the technology an indispensable aid and enabler.
AI, however, has also shown that adding decision-making and the ability to “learn” to a computer’s traditional number-crunching role is changing the work done by the research scientist. Uncertainty regarding the extent and nature of that change is the source of some anxiety. That’s certainly true for medicinal chemists.
Hugo Ceulemans, scientific director of discovery data science at Janssen Pharmaceuticals, is an advocate of AI.
“First of all, AI is never replacing the traditional researcher,” he says. Instead, the technology will allow chemists to focus more effort on innovative science.
“It is making more data available,” Ceulemans adds, noting that discovery labs are filling with quantities and varieties of important data that defy human processing for two reasons. “Humans have a limited capacity for dealing with data in decision-making. And it’s also really boring. Machines excel at it because they are not afraid of boredom.”
AI’s grunt work also opens windows in the discovery lab, Ceulemans says, by “asking researchers to be more open than they traditionally have been to more exotic solutions they might not have anticipated or ever thought of.”
And despite its data-parsing power, dot-connecting skills, and ability to improve performance as it is exposed to more data, AI in its current state falls well short of emulating human intelligence, Ceulemans says. He ranks current technology at the level of “idiot savant”: alarmingly good in a narrow field of endeavor, yet utterly naive outside it.
“True artificial intelligence would not only mean that a machine can learn but that it can also reason and actually decide,” he says. “Currently, what the machine does is boring, mind-numbing explorations and correlations, offering the scientist suggestions that are not set in stone. It is the scientist that makes the ultimate decision.”
But some researchers see a new world coming into focus, one where the line blurs between data scientist and traditional research scientist. It’s a world that will include chemists with a skill set that is not yet clear.
“As machine intelligence kicks in, we may end up with fewer chemists doing this kind of work,” says Derek Lowe, a drug researcher with experience at three large drug companies and a major biotech firm, referring to basic research chemistry and biology. “But they are going to be doing it at a much higher level.”
Lowe, who recently touched on AI in his In the Pipeline blog, notes that the technology has not yet kicked in. “The most useful and reliable use of machine learning in drug discovery is probably related to imaging,” he says. “Right now, we don’t have any machines to which we can say, ‘Hey machine, go find me a compound that will affect the so-and-so receptor so I can cure pancreatic cancer.’ ”
But such machines may be on the horizon, Lowe says, given the pace of AI evolution and the range of possible applications in the pharmaceutical lab. “I don’t see any reason why we are not going to be there.”
Indeed, the technology appears to be at a kind of tipping point, reflected in a recent spike in media coverage, much of it negative and hyperbolic. Michael Shanler, a research vice president covering life sciences for consulting firm Gartner Inc., says the mainstream media is largely to blame for positioning AI right now as simultaneously delivering the ultimate promise in computing and “the end of the world” as robots take over.
Understanding AI begins, Shanler says, with realizing it is composed of many technologies, creating a challenge in implementation. “There is a zoo of machine-learning approaches out there, which have a variety of computing requirements,” he says. Lab managers must take traditional science, which varies from lab to lab, into consideration when implementing AI systems. And traditional scientists need to configure AI systems to meet their needs.
“You need process owners, people who understand the science or business process,” Shanler says. “You need them to ask the right questions to guide the machine learning.”
That guidance, however, must steer clear of researchers’ bias. Just as a bad researcher is one who finds what he or she is looking for, there is a chance that an intelligent machine may operate as a mechanized bad researcher.
But implemented correctly, AI steers in the opposite direction of myopic research, according to Shanler. “That is where smart machines and AI kind of can excel,” he says. “They can deliver an unanticipated result, one that somebody might have overlooked because of their own bias. This is one of the promises of AI, and I’ve seen it with some of my clients.”
AI is making particular headway in the field of chemical synthesis planning. Chematica, chemical synthesis planning software developed by Grzybowski Scientific Inventions and recently acquired by MilliporeSigma, for example, passed the test of establishing a synthetic route to eight target molecules selected by MilliporeSigma and academic researchers in the U.S. and Poland.
CAS, a division of the American Chemical Society, which publishes C&EN, is also getting into the game. Last year, CAS licensed ChemPlanner, a retrosynthesis engine developed by the scientific publisher John Wiley & Sons. Later this year, CAS will introduce a version that incorporates a trove of new data, including its collection of human-curated reactions. According to Matthew J. Toussant, CAS’s senior vice president of product and content operations, CAS’s SciFinder-n will add 90 million reactions to the 2 million now available in ChemPlanner.
Wiley currently has 10 customers for ChemPlanner, all in the pharmaceutical industry. The partnership is an early move into the AI market for CAS, which is also developing programs for analyzing data clusters, neural networks, and other information with the goal of aiding chemical synthesis.
Meanwhile, AI is making inroads elsewhere. Earlier this year, Google reported that an AI algorithm it developed can scan patients’ eyes to predict heart disease, and researchers at Stanford University successfully put AI’s image-reading capabilities to work screening moles for melanoma.
On the other hand, one major AI project—the installation of IBM’s Watson for Oncology at the University of Texas MD Anderson Cancer Center—collapsed at the end of 2016 when the university system’s audit office pulled the plug, citing improprieties in the procurement process. News of the failed project, which had run up a price tag of $62 million by the time it was stopped, was a black eye for AI in drug discovery.
The university’s report on the project, however, stated in boldface type that the termination should not be taken as a positive or negative assessment of AI technology or IBM’s Watson computer, which is being used by Pfizer in immuno-oncology and elsewhere in pharmaceutical research.
As AI works its way fitfully into the drug laboratory, traditional tools—laboratory information management systems (LIMS), databases, and scientific instruments—are evolving to accommodate it. Thermo Fisher Scientific, a major supplier of laboratory IT, is pivoting from big data to AI, positioning its Platform for Science LIMS as a foundation for machine learning and other intelligent machine functionality.
Clarivate Analytics, a life sciences data and analysis company, added three AI engines as adjuncts to its MetaCore and Integrity data and analytics products over the past three years. And IBM continues to market its Watson intelligent computing system, including a version called Watson for Drug Discovery.
Executives at Thermo, Clarivate, and IBM agree that the drug discovery sector is at an early stage of experimentation with the technology, even as research labs feel out new roles for scientists interacting with technology.
“I don’t think organizations know yet how they are going to implement AI,” says Trish Meek, director of commercial operations for digital science at Thermo Fisher.
One of the first things companies are learning, she says, is that they need a well-managed database on which to build it. “Everyone knows AI offers opportunities, but they realize laboratory IT infrastructure doesn’t support it without a platform approach to informatics.”
That said, Jeff Noonan, business development director for Thermo Fisher’s digital science division, notes that IT platforms are becoming less monolithic—and researchers more empowered in their access to and use of data—partly because of cloud service software applications.
Noonan sees a shift from “a place where organizations require their own IT departments and their own coding departments to develop software applications to where the creation of those applications happens in the laboratory, giving scientists the ability to create solutions to address the needs of their specific laboratory.”
Clarivate is also working with drug companies that are looking for an entrée into intelligent systems, says Roger Willmott, the firm’s vice president of technology. “They are using AI in modeling, but it is at an experimental stage.” Data quantity and quality are interrelated hurdles. “Researchers clearly have a lot of data,” he says. “They are all trying to work out how they can use it.”
Louisa Roberts, a life sciences executive at IBM Watson Health, adds that data, once processed by an intelligent machine, need to be presented in a clear and navigable visual format. Data visualization, she says, needs to be customized by researchers to meet the specific needs of their labs.
Acknowledging that the shutdown of the MD Anderson project dealt a blow to AI, and Watson in particular, Roberts says IBM is hoping to showcase results achieved by other organizations putting the technology to use. Pfizer is a major showcase at the moment.
John Gregory, director of R&D project and portfolio management at Pfizer, outlined the company’s immuno-oncology research with Watson at the recent Molecular Medicine Tri-Conference in San Francisco, an annual event focused on technology in drug discovery and development.
Gregory described Watson as a Pfizer-trained cognitive engine that interprets millions of publicly available documents as well as Pfizer’s internal data to help identify potential targets and indications. Sources include medical abstracts, full-text journals, medical databases, patents, drug labels, and Pfizer toxicology data reports.
Gregory said Pfizer hopes to use Watson for purposes such as tumor identification, validation of gene targets, and identification of novel gene sets associated with immune response.
Drug companies are also pursuing AI along other avenues. Several of them have joined a new consortium called Machine Learning for Pharmaceutical Discovery and Synthesis. Hosted by the Massachusetts Institute of Technology, the group seeks to replace labor-intensive trial-and-error work in molecule synthesis with a computational reaction-design process.
Regina Barzilay, a computational scientist at MIT, has worked on AI projects with consortium members in the Cambridge, Mass., area. “I can tell you that they totally realize it is a huge place of opportunity,” she says. “They are trying to learn, and they are fast learners.” Barzilay sees AI as a true turning point in technology, one that will free scientists from data drudge work and repetitive experimentation.
It is early days, however, and most discovery science researchers say they have seen only the first flashes of intelligent life in laboratory data analysis systems.
“Here everything is pretty manual,” says a medicinal chemist at a major biopharmaceutical firm, who asked for anonymity because he’s not authorized to speak publicly on the subject. He says researchers at his company are not concerned that AI will take over their jobs, but he does anticipate some pushback as it inevitably encroaches.
“There is concern about the human element. I think medicinal chemists value that most highly. Personally I think some of them value it too highly,” he says. “Medicinal chemists are a little reluctant to adopt these things that take them out of the decision-making or idea-generating process.”
Ashutosh Jogalekar, a computational chemist at Revolution Medicines, has also yet to engage with an intelligent machine at work. “We don’t really deal with big-data issues,” he says. But, he adds, the technology clearly has potential to accelerate discovery at the biotech start-up.
“Even in a fast-paced drug discovery environment, it takes one week to get data back when you give the scientist a compound to test,” Jogalekar says. “Maybe AI can have an impact on analyzing the results.”
Other areas where Jogalekar sees possible change include reaction planning, analyzing results of phenotypic screens, and similar lab operations. “But the technology isn’t good enough yet or easy enough to use,” he says, citing the need for improved data visualization in AI systems. Jogalekar adds that scientists in his lab are looking forward to seeing the technology improve. “From the conversations I’ve had, at least for now, they are welcoming.”
The biopharma chemist agrees. “I place a lot of value in this stuff,” he says. “I would like to see a lot more of it, and I would like to see it improve.”
Lowe, who has decades of experience as a medicinal chemist, says there is little concern in discovery labs about smart machines turning traditional human scientific endeavor into an automated commodity. Automation is nothing new, he says, and science abides.
“There are big chunks of stuff now that are done as a kind of science as a service that used to be bespoke areas of research,” he says. “Sequencing DNA, collecting NMR data—if you want to go back further, LC mass spec data; now, these are all walk-up machines. There has always been a tendency of things going from cutting edge to difficult to easy to automated.”
The world in which one asks a machine to make a compound is fast approaching, Lowe says, and there is little question that smart machines will gain traction. In effect, resistance is futile.
“These machines are pretty good, and they are getting better,” he says. “We’re not.”
CORRECTION: This story was updated on April 2, 2018, to correctly reflect the IBM system installed at University of Texas MD Anderson Cancer Center. It was IBM’s Watson for Oncology, not Watson for Drug Discovery.