
Business

Shooting For A Quantum Leap

Computational chemistry is reemerging as a viable means of accelerating drug discovery, but it has a long way to go

by Rick Mullin
January 25, 2016 | A version of this story appeared in Volume 94, Issue 4

 

TECH TRANSFER
Graphics processing unit technology developed for video games can run chemical computations, such as this model of a drug-protein interaction.
Credit: Nimbus Therapeutics

On Nov. 7, 2013, Bill Gates posted the following on Twitter: “Schrödinger Inc.’s engineers and scientists are doing impressive work. Fight disease with code.”

The online boost was a big deal to a company whose purview is a melding of computing and health care, Gates’s two great concerns in life and business. And the $30 million that Gates invested in Schrödinger was an even bigger deal.

But what may be most interesting is that the computing visionary was not pointing to anything new. Rather, Gates’s show of support coincided with a renaissance in the application of computational methodology to the daunting task of discovering new drugs.

Since that tweet, Schrödinger and other companies applying computational chemistry in pharmaceutical labs have won even more love. Compounds discovered via code are advancing to the clinic, investors are coming forward, and new partnerships with traditional pharma companies are being announced.

Yet computational chemistry assumed center stage in the drug industry once before. In the 1990s, purveyors of the technique pulled in with a big truck and big promises. Like combinatorial chemistry around the same time, the technology ended up disappointing and slid into what in business parlance is termed a valley of despair.

Now, as advances in computing allow computational chemistry to again gain traction in pharma labs, practitioners are looking back at its years in the valley, noting what has changed and what has not.

One view is that although the methodologies of 20 years ago are still at the core today, technology advances in arenas such as gaming and video streaming have improved computational techniques for drug discovery. Others say it is the research community that has evolved, becoming more open to computational chemistry. Everyone agrees that being able to deliver user-friendly laptop tools to benchtop chemists is a big part of the new interest.

Computational chemistry is advancing at software and service providers such as Schrödinger and in the labs of major drug companies. But there is also a handful of start-ups making names for themselves, many led by scientists with computational experience from other industries.

Verseon, a company using its own computational methods to facilitate drug discovery, went public last year, although it has been around since 2002. Its founders are credited with developing compression technology used by Netflix and other firms to stream video online.

Verseon hopes to bring an anticoagulation therapy to the clinic this year. It’s also working on a therapy for diabetic macular edema and an angiogenesis inhibitor for solid tumors.

Chief Executive Officer Adityo Prakash has been with Verseon since its founding. Looking back at computational chemistry’s years in the woods and recent reemergence, he cautions that the sector still faces many challenges in bringing computer-based drug design up to speed.

“At first there were big promises: People coming along saying they will solve everything in drug discovery,” Prakash says. Software vendors in the early 1990s assured researchers that the computer design techniques applied in industries such as architecture and aeronautics would advance quickly in pharmaceutical labs. “Of course, the people making these things forgot to mention that there is still a lot of science that needed to be solved,” he says.

Computational chemistry was banished to the valley of despair, Prakash recalls, “but now you’re seeing a resurgence as people forget the disappointment a little and as a new generation of people begin showing interest.”

Much of that interest is carried on the back of advances in graphics processing unit (GPU) technology, which is now doing for computational chemistry what it did for video games and other image-intensive applications.

Machine learning, often used to determine which ads to display to users on the basis of information a website has gathered about them, also has applications in chemistry and biology. Laboratories swamped in data eagerly welcome these advances in computing power.
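
In its simplest chemical guise, that machine learning is supervised classification: represent each compound as a numerical fingerprint and train a model to predict its activity. A minimal sketch with scikit-learn, using synthetic stand-in data rather than any real assay results:

```python
# Minimal sketch of activity prediction from binary molecular fingerprints.
# The data here are random stand-ins; a real project would use measured
# activities and fingerprints computed from compound structures.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 1024))     # 500 "compounds", 1024-bit fingerprints
y = (X[:, :16].sum(axis=1) > 8).astype(int)  # toy "active/inactive" labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```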

“That’s all good,” Prakash says. “But if you think that will provide you with that fantastic world of answers to everything, it is obviously not going to do that. There are fundamental challenges to what machine learning can do.”

Researchers still must identify drug targets using data from previous experiments before deploying computational engines to screen compounds. From there, traditional chemistry and biology take over as compounds move into the clinic.

Over the past 13 years, Verseon has developed what it calls its Molecule Creation Engine, a platform that generates virtual, druglike, chemically diverse, synthesizable molecules in numbers it claims are far in excess of the synthesized compounds in corporate compound libraries. Its Molecule Modeling Engine then brings huge, cloud-based computing power and physics algorithms to bear in parsing this virtual library.
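
Verseon’s engines are proprietary, but the two-stage idea (enumerate a large virtual library, then filter it in silico) can be illustrated with the open-source RDKit toolkit. The scaffold, substituents, and Lipinski-style filter below are hypothetical stand-ins, not Verseon’s chemistry:

```python
# Illustrative two-stage sketch: enumerate a combinatorial virtual library,
# then keep only druglike candidates. Scaffold and substituents are
# hypothetical; a production engine would enumerate vastly more chemistry.
from itertools import product
from rdkit import Chem
from rdkit.Chem import Descriptors

SCAFFOLD = "c1cc({a})ccc1{b}"  # hypothetical disubstituted benzene scaffold
SUBSTITUENTS = ["O", "N", "F", "C(=O)O", "CC", "OC"]

def druglike(mol):
    """Crude Lipinski-style filter for druglike candidates."""
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Descriptors.NumHDonors(mol) <= 5
            and Descriptors.NumHAcceptors(mol) <= 10)

library = []
for a, b in product(SUBSTITUENTS, repeat=2):
    mol = Chem.MolFromSmiles(SCAFFOLD.format(a=a, b=b))
    if mol is not None and druglike(mol):
        library.append(Chem.MolToSmiles(mol))

print(f"{len(library)} druglike candidates enumerated")
```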

The firm’s technology is proprietary and not offered as a service. “In the financial world, if you figure out algorithms that systematically beat the market, you wouldn’t sell them as a tool for someone else to make money,” Prakash observes. “You would build your own hedge fund. The only reason people sell tools in this space is because they don’t work.”

Verseon’s approach represents one end of a business model spectrum in computational drug discovery that includes firms offering their technology as a service and others that sell their computational engines as products.

Atomwise, launched in 2012, is a technology service firm that works through partnerships, counting Merck & Co. and Dana-Farber Cancer Institute among its clients. The firm is currently working on a drug repurposing program for Ebola with virologists at the University of Toronto. It recently landed a $6 million investment from a group of five venture capital firms led by Data Collective.

Abraham Heifets, Atomwise’s CEO, says the resurgence in computational methodologies is propelled by results. “People in drug discovery are by and large scientists, and they like to have evidence. The burden of evidence is on the technologists, on us, to show that there is really new value that can be provided,” he says. “Fortunately, in this industry, scientists are willing to be swayed by data.”

But the technologists face intense challenges in drug discovery, says Heifets, formerly a software engineer at IBM. “When we talk about using computers to design buildings and airplanes, these are useful analogies,” he says, “but buildings and airplanes were designed by people to be understood by people. Biology is more difficult. It was not designed to be understood by people.”

Atomwise developed its core technology, AtomNet, on the principle of deep-learning neural networks, a kind of advanced machine learning modeled on how the human brain solves complex problems, according to Alexander Levy, chief operating officer. The concept, decades old, has emerged in drug research in recent years thanks to new algorithms and an incredible rise in computing power, he says.
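
Atomwise’s exact architecture is its own, but the general kind of model Levy describes can be sketched in a few lines of PyTorch: a 3-D convolutional network that reads a voxelized protein-ligand complex and emits a binding score. The grid size, atom-type channels, and layer widths below are illustrative assumptions, not AtomNet’s design:

```python
# Sketch of a deep 3-D convolutional network over a voxelized
# protein-ligand complex; dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class BindingNet(nn.Module):
    def __init__(self, channels=8):  # e.g., one channel per atom type
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 6 * 6 * 6, 128), nn.ReLU(),
            nn.Linear(128, 1),  # binder / non-binder logit
        )

    def forward(self, grid):  # grid: (batch, channels, 24, 24, 24)
        return self.classifier(self.features(grid))

# A 24-angstrom cube at 1-angstrom resolution around the binding site
net = BindingNet()
logit = net(torch.randn(2, 8, 24, 24, 24))
print(logit.shape)  # torch.Size([2, 1])
```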

That power is needed, notes Kenneth M. Merz Jr., a chemistry professor and director of the Institute for Cyber-Enabled Research at Michigan State University and editor-in-chief of the American Chemical Society’s Journal of Chemical Information & Modeling. Because of the complexity of biology, computational systems have to deal with a huge number of bonding configurations.

Adding to the computational burden, the energy calculations involved have “all the quantum weirdness you don’t have to worry about in computer-aided design in other applications,” he says.

Merz, who is also the chief scientific officer of QuantumBio, a developer of computational tools, points to free energy perturbation (FEP), a computational method that uses statistical mechanics to compute the difference in binding free energy between two drug molecules bound to the same target.
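
In its textbook form, FEP rests on a deceptively simple identity, the Zwanzig relation, which extracts that free-energy difference from an exponential average of the energy gap between two states, sampled in just one of them:

```latex
% Zwanzig (1954) relation: free-energy difference between states 0 and 1
% (e.g., two ligands bound to the same protein), estimated from
% configurations sampled in state 0. U_0 and U_1 are potential energies,
% k_B is Boltzmann's constant, and T is temperature.
\Delta A_{0 \to 1} = -k_B T \,
  \ln \Bigl\langle \exp\!\Bigl( -\tfrac{U_1 - U_0}{k_B T} \Bigr) \Bigr\rangle_0
```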

Developed in the 1950s and first applied to drug discovery in the 1980s, FEP flopped in many cases because computers lacked the power, and force fields the accuracy, to calculate binding interactions reliably. But it is making a comeback.
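
The catch hides in that exponential average: it converges only when the simulation samples configurations relevant to both states, which is why raw computing power and good sampling matter so much. A toy calculation in Python, with two harmonic wells standing in for two bound states and units chosen so that kT equals 1, shows the estimator homing in on the exact answer as sampling grows:

```python
# Toy Zwanzig estimator on two harmonic wells (kT = 1). The exponential
# average converges to the exact free-energy difference only as sampling
# of state 0 covers the region important to state 1.
import numpy as np

rng = np.random.default_rng(1)

def U0(x): return 0.5 * x**2          # state 0: well at x = 0, k = 1
def U1(x): return 1.0 * (x - 1.0)**2  # state 1: well at x = 1, k = 2

# Exact answer for harmonic wells at kT = 1: dA = 0.5 * ln(k1 / k0)
exact = 0.5 * np.log(2.0)

for n in (10**2, 10**4, 10**6):
    x = rng.normal(0.0, 1.0, size=n)  # Boltzmann samples of state 0
    dA = -np.log(np.mean(np.exp(-(U1(x) - U0(x)))))
    print(f"n = {n:>7}: dA = {dA:.4f}   (exact {exact:.4f})")
```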

“This method has resurfaced recently largely through Schrödinger’s sales and marketing effort,” Merz says. “They are taking the stuff from the early ’80s and adding some bells and whistles, addressing the sampling problems and force-field inadequacies, and showing it to the major pharma companies.”

Some of those bells and whistles, especially high-speed GPUs, are doing the trick. “Twenty years after FEP was supposed to solve the world’s problems,” Merz says, “something is being delivered that has a chance.”

TRANSPARENCY
Schrödinger’s Farid insists that new computational models should be made public to avoid hype.
Credit: Schrödinger

Ramy Farid, president of Schrödinger, says the drug discovery enterprise can thank kids playing video games for those game-changing GPUs. And Schrödinger can thank Gates, whose investment in the company allowed it to make significant scientific breakthroughs. The money supports more than 100 developers using huge computing clusters that have expanded force-field models from covering about 3% of chemical space in the 1990s to 98% today.

“FEP is nothing new,” Farid says. “What you need are accurate force-field simulations for this to work. You need a good sampling of systems and advanced molecular dynamics simulation.”

The fastest central processing units (CPUs) that were standard for computational chemistry in years past were woefully inadequate. GPUs today are about 50 times as fast as current CPUs, Farid says, enabling FEP techniques that let chemists modify molecules computationally while keeping track of potency and other key characteristics of drug activity.

Schrödinger has been adding functions and features to FEP modeling, Farid says. The company draws from a huge database of protein structures and ligands, all garnered with computational power that far exceeds that of the major drug companies combined, he claims.

That could change. “The really cool thing is that I don’t think that will be true soon,” he says. “Pharma companies are now investing in hardware and reassessing the cloud.”

Unlike a firm such as Verseon, Farid says, Schrödinger is not proprietary in its development of computational chemistry technology. “We are publishing everything. I think this is important to the community,” he says.

Proprietary systems developed in the early days contributed to the overselling of computational chemistry, according to Farid. “Schrödinger has a huge number of academic partners, and our details are published. The only way you can do this is to be transparent. We can’t go off on the hype curve again.”

Schrödinger also has industry partners, including Sanofi. Last year, the two companies signed an agreement, worth up to $120 million, under which Schrödinger will provide computational design technology for 10 drug discovery programs at Sanofi.

Sanofi-Schrödinger research teams have been formed around specific targets, and work has just begun, says David Aldous, who heads lead generation R&D in Sanofi’s Boston laboratories. But Sanofi knows Schrödinger well, having purchased a half-dozen Schrödinger computational products over the years, Aldous says, notably FEP+ and WaterMap, recent introductions that deal effectively with the huge amounts of data entering the lab.

NEXT GENERATION
Nimbus’s Kapeller gets first crack at computational models in development at Schrödinger.
Credit: Nimbus Therapeutics

Schrödinger does not have an in-house drug discovery program. However, it is part-owner of Nimbus Therapeutics, a drug discovery company that uses Schrödinger’s products to vet compounds. Launched in 2010, Nimbus has a nonalcoholic steatohepatitis therapy in Phase I trials and a preclinical inhibitor of IRAK4, an interleukin-1 receptor-associated kinase, that it is developing with Genentech. Nimbus also has a crop science partnership with Monsanto.

Nimbus works like other small drug discovery companies but turns to Schrödinger for computational work that takes the place of high-throughput wet chemical screening, according to Rosana Kapeller, chief scientific officer. Nimbus also provides a proving ground for Schrödinger, often serving as the first user of its new products.

“Before we think about doing any work with a target,” she says, “Schrödinger throws the kitchen sink at it in terms of its computational tools, to understand if these tools would be transformational in driving the discovery process for that particular target. Everything we do with them is next-generation.”

Although the in silico origins of the IRAK4 compound are incidental to Genentech, the biotech firm has been using computational methods in-house for more than a decade, according to Jeff Blaney, director of computational chemistry and chemical informatics.

Blaney downplays the notion that computational chemistry ever fell into a valley of despair, instead seeing it as making steady, incremental advances. He agrees with software developers that a big limiting factor has been system speed, and he credits advances in GPU technology for getting computational chemistry on the bench at Genentech.


“With just a laptop, we can deliver to a scientist’s desk very good molecular graphics, visualization, and modeling tools,” he says. Previously, this work was done by computational specialists such as Blaney himself. “What has really changed over the last 10 years—the last five years, for sure—is that we are putting good power in the hands of the scientists that actually run the experiments.”

The tools still have a way to go, Blaney cautions. “We have done a lot of work with FEP. I’ve done it at Genentech. I have been involved with groups doing it going back to the 1980s, and now we are learning where the sweet spot is.” Today, he says, it is possible to predict the potency of a new molecule derived from changes to a few atoms in a previously known molecule. “But it is still not as broad as I would like.”

Ideally, he says, researchers would have computational methodology that would allow them to dock a structurally novel small molecule they’ve designed to a high-resolution crystal structure and predict its potency.

“Now, you can’t get anywhere close to that,” he says.

Software developers generally would agree. But executives such as Atomwise’s Levy are optimistic that the technology will continue to advance incrementally as it proves itself in the lab. “We are in a field that in the fullness of time will respond to new information,” he says.
