News last month that Illumina, a genome-sequencing technology firm, had gotten the price of sequencing the full human genome down to $1,000 was hailed as a great leap forward for drug research. Low-cost sequencing is considered crucial to the medical breakthroughs promised by the initial decoding of the human genome in 2000. Such breakthroughs are already occurring in cancer research and elsewhere in the form of targeted therapies—drugs designed to work on patients with specific genetic attributes.
The drive for cures tailored to an individual patient’s biology, known as personalized medicine, also relies heavily on genomics research. Technological advances and successes with new drugs have bred optimism among drugmakers and regulators that the world has entered a new age in medical research.
“I am here to declare victory, the coming of age of this vision, this technology,” Janet Woodcock, head of the Food & Drug Administration’s Center for Drug Evaluation & Research, told attendees at an event sponsored by the Personalized Medicine Coalition last May. “Targeted therapies have reached the mainstream.”
Many industry watchers, however, believe it is far too early to declare anything like victory. Despite the growing number of drugs known to work in subsets of patients, critics see little headway toward the ultimate vision of tailoring therapies to individual patients. Drug discovery efforts, they say, have focused too narrowly on genomics, without enough regard for the patient information—phenotypic data—that reflects environmental influences.
The process of moving personalized therapies from the discovery lab to the clinic is even more problematic; some observers describe it as a disaster. The traditional method of enrolling large numbers of patients in trials is mismatched, they say, to the mechanics of genomics-based medical research. Meanwhile, insurance companies say they are entirely at sea when it comes to determining how diagnostic tests that support targeted therapies are to be reimbursed.
The level of frustration in some quarters is stoked by the slow uptake of digital devices that can monitor patient health from afar. Smartphones and other devices are capable of supplying reams of immediately useful data on patients, providing much-needed context to the far-more-confusing reams of genomics data on hand, according to Bernard H. Munos, founder of the InnoThink Center for Research in Biomedical Innovation.
“There has been a sort of disappointment in the speed with which this whole thing is moving forward,” says Munos, a former R&D strategy adviser at Eli Lilly & Co. In recent years, he says, researchers have uncovered a far greater interaction between the genome and the environment than had been assumed when genomics launched in drug laboratories more than 15 years ago. “I don’t think we can have personalized medicine without a lot more data about the patients themselves,” Munos says.
Lacking these data, researchers have no access to a patient’s presymptomatic history and other information that provide context for genomic data. The emergence of biosensors, telemetry, and cloud computing, according to Munos, has created a means of gathering, storing, and transmitting data that is useful when a patient is diagnosed with a disease. “We can access those data, play them in reverse, and start seeing where things got off track,” he says.
George Poste, codirector of the Complex Adaptive Systems Initiative (CASI), an effort at Arizona State University to develop interdisciplinary research in health care, sees the work in genomics moving toward what he calls “precision” medicine—the development of therapies targeting patient subsets—rather than the individual patient focus implied by personalized medicine.
Poste is concerned about the absence of phenotypic data in clinical research and, like Munos, advocates greater use of digital technology for monitoring patients and collecting health information. He is also critical of the lack of standards for developing biomarkers, which are traceable substances used to measure a biological state, and the widespread failure to get diagnostics into the clinic. He cites critical disconnects along the way from drug discovery to the marketplace.
“We have this asynchrony between complex science, increasing regulatory oversight, and reimbursement,” he says. Much of the problem centers on how data are managed at the beginning of the trek. “We have poorly curated data from research, incompatible data sets, interoperability problems just in the discovery arena,” he says. “How do you migrate that into the system that you use in clinical trials? How do you integrate that with the health care payment system?”
Sources across the spectrum from drug discovery to reimbursement recognize dysfunction in the translation of genomics research to approved drugs. “I would represent the ultimately frustrated person in this regard,” says Anna D. Barker, president of the National Biomarker Development Alliance (NBDA), which was launched last month.
Barker, who is Poste’s codirector at CASI, questions the basis for deciding which biomarkers to pursue in drug discovery. “A lot of biomarkers have no real clinical utility, but they are interesting biologically,” she says. “If you ask a good clinical question and your interest is in the clinical utility, then you have thought much differently than someone who has just gone on a discovery exploratory journey.”
NBDA, whose members include drug companies, research organizations, and patient groups, wants to develop better standards for establishing biomarkers in an “end-to-end” system encompassing research and clinical development, according to Barker. This will require researchers to lift their heads from the genome, she says.
“It’s about having deep phenotypic data on patients,” she says. “That has been a loss in this whole odyssey we have been going through in genomics.”
NBDA will use the research enterprise’s 15 years of work on biomarkers in establishing standards—“It’s not like we haven’t been thinking about this stuff,” she says—while keeping an eye out for gaps in the science. The key will be having enough of the right kind of data. “Just having people’s genomes is not enough,” Barker says.
Tomasz Sablinski, a former Novartis researcher who left the drug company in 2008, launched Transparency Life Sciences, a drug development firm, in 2010. He too is unhappy with how genomics is advancing into the clinic. “It’s a catastrophic failure,” he says. He describes genomics as purely academic unless it translates to benefit for patients.
And he sees a fundamental disconnect between research and the clinic. “All of this genomics has advanced pretty much at the same speed as the rest of the world in technology advances—cheaper, faster, more robust,” Sablinski says. “At the same time, clinical trials are done in exactly the same way as they were in the ’80s … in the ’60s if you stretch it.”
The clinical model that Transparency favors leverages digital devices that connect patients to clinics and data to researchers, Sablinski says. “We saw an opportunity to move things into the 21st century—a crowdsourcing idea.”
Transparency is currently using its approach to clinical trials in a pilot study with Genentech in the area of inflammatory bowel disease. It’s also partnering with researchers at the Icahn School of Medicine at Mount Sinai to assess the use of metformin, a widely used diabetes drug, to treat prostate cancer. The firm, working in partnership with Stanford University, recently received clearance from FDA to begin Phase IIb trials of lisinopril, a blood pressure medicine, as a treatment for multiple sclerosis.
If clinical trials lag the science of personalized medicine, the protocol for health care reimbursement may be even further behind. Insurance firms are unable to determine which diagnostic tests are covered or even ascertain which have been given, according to Michael Kolodziej, director of oncology strategy at the insurance giant Aetna.
“The truth is that what we are doing right now is mostly getting internal content experts together to set up ideas as to how we should evaluate and potentially modify our current evidentiary process, because that drives coverage,” Kolodziej says. “The problem is that the rate of change in personalized medicine is so fast that we are questioning whether or not we need to have another route of evaluation.”
Reimbursement for molecular diagnostics—tests that use biomarkers to monitor disease or detect risk—poses a problem because the insurance industry doesn’t know how to evaluate them, he says. “It would be great if we had proficiency testing,” Kolodziej says. “Traditionally, if a doctor orders a test, he can presume the test is done right. Unfortunately, I don’t think we can say that’s necessarily true today in the era of molecular testing. We honestly don’t know what people are doing.”
Insurers say they need guidance from regulators on the analytical validity of diagnostic tests. They had no guidelines at all until January 2013, when the American Medical Association issued a standard code for some molecular diagnostic tests. “That was a step in the right direction, but the rate of change is so fast that the codes cannot keep up with the technology,” Kolodziej says. In the era of next-generation sequencing, multiple tests will be run on the same sample. There are no codes for multiple testing, he says.
The problem has been noted by the National Institutes of Health, which is funding a study on reimbursement options for molecular diagnostics. “But they’re projecting results for 2017,” Kolodziej says. “2017 is infinity in this space.”
FDA’s Woodcock recognizes that the research continuum from personalized medicine discovery to patient is still far from optimal, but she is bemused by the contention by some that a reset button needs to be pressed. “Different people have different ideas of what personalized medicine is,” she tells C&EN.
With FDA’s help, momentum is building for targeted therapies, according to Woodcock. “We clearly designated a large number of breakthrough therapies; most of them are targeted. We are approving a large number of targeted drugs to specific genetic defects.”
Woodcock says FDA is working with clinical data experts from the pharmaceutical industry and the information technology sector to develop standards for trials tailored to targeted therapies as part of the Coalition for Accelerating Standards & Therapies, which was launched in 2012.
“We are going disease-by-disease to get to outcome measures,” Woodcock says. The group is working to coordinate standards for established methods of collecting information and those being developed for electronic medical records. “The goal is to do the research in the process of an ordinary health care encounter,” she says. “But I can tell you we are not there yet.”
Woodcock says the success of targeted therapies for cancer is proof that personalized medicine is gaining traction. Many observers note that most of the approved targeted drugs are for cancer, a therapeutic area that lends itself to genomic research. Woodcock, though, points to Kalydeco, a cystic fibrosis drug approved in 2012, as an example of a gene-targeting therapy in an area of unmet medical need other than cancer.
She also recognizes that some diseases will be tougher to crack with genomics. “If you want to talk about hypertension, schizophrenia, or diabetes, and you want to find the gene that causes that, well good luck.”
Pharmaceutical companies that have had targeted drugs approved agree with Woodcock that the era of personalized medicine has arrived. At Novartis, success with targeted drugs such as Ilaris, an arthritis drug, and Afinitor, a breast cancer treatment, validates a risky change in course that counts as a first foray by a drug firm into personalized medicine.
Meanwhile, Novartis’s BYM338, a monoclonal antibody in development as a targeted therapy for sporadic inclusion body myositis, has been awarded breakthrough status by FDA, as has its lung cancer drug candidate LDK378, for which the company has filed for registration following Phase II trials.
“The notion of looking at pathways and homogeneous patient subsets before broad populations is something we try to do in every program. That’s our mantra,” says Evan Beckman, head of translational medicine at the Novartis Institutes for BioMedical Research (NIBR), adding that advances in genome screening have fed breakthroughs at Novartis. “These technologies have enabled the whole NIBR mission.”
Beckman says he is bullish regarding the progress of personalized medicine, but he is not surprised at the frustration expressed by others. “Any biological revolution gets translated into hype very quickly, and the real work takes work,” he says. “To me, we are really emerging into a golden age.”
Novartis has not had difficulty advancing basic research into the clinic, Beckman claims, because of NIBR’s strategy of assigning physician-scientists to research projects from their inception. By anticipating at the test-tube and animal-test stages what will be required at the first-in-human stage, the company is able to compile the most pertinent preclinical data to design effective trials.
The first fruits have been in oncology, he acknowledges, and personalized medicine continues to struggle for a foothold elsewhere. “Probably the most important limitation that has befallen the general medicine piece is matching the genetic DNA with a good phenotype of the patient,” he says.
Michael S. Vincent, vice president of biotherapeutic clinical R&D at Pfizer, is less sanguine about the new age of targeted therapies, acknowledging that proteomics and other “omics” technologies fell short of expectations. Some were even discredited. He now sees researchers navigating a structural gap between genomics-based target selection and the traditional clinical protocol.
“Whenever you start incorporating a data-dense information stream, the traditional clinical infrastructure kind of breaks down,” Vincent says. Next-generation sequencing data and clinical data nonetheless have to be integrated. Pfizer has established a separate pipeline for exploratory data, unburdening the process from a strict clinical protocol, according to Vincent. Thus sequestered, the data are interrogated to identify the analytes worth developing in a way that he says will eventually pass regulatory muster.
“The focus is on the science,” he says. “You may not have a strong hypothesis going in, but you are going to collect enough data so that you may be able to make a robust inference at the end. There is no shame in doing hypothesis-free science.”
Eric Topol, director of Scripps Translational Science Institute, agrees with Novartis’s Beckman that any revolutionary change in medicine takes a long time to fold into the system. He points out that a great deal of human biology was not taken into consideration in the early days of genomics research.
“In recent years there has been marked appreciation of the gut microbiome,” Topol says. “I don’t think anybody back in 2003 had any idea how important that could be in regard to our immune system and our susceptibility to everything from obesity to diabetes to autoimmune disorders to cancer.” The impact of the epigenome and the human methylome on disease is also now appreciated.
“There is this whole parallel path of heritability that isn’t DNA dependent,” he says. “Finally, I don’t think that back then people knew we would have sensors that would measure any physiologic metric of man with a smartphone. And when you start doing that and start looking at somebody’s physiology in real time, you say, ‘Whoa, there is a lot of information there that you can’t get out of the DNA sequencing or other ‘omics.’ ”
Topol says he is more frustrated by what the clinical research community does know but hasn’t acted upon. “There is an incredible amount of knowledge about individualized treatments that we are not using,” he says. “I’m talking about drugs that are out there that are some of the most commonly used drugs where we know the particular genomic variant that predicts horrendous side effects or efficacy, and we don’t even use that information. It’s pathetic.”
He cites Tegretol, a neuropsychiatric therapy with potentially fatal dermatological reactions that in other countries is prescribed only after simple genotype safety testing. “Here in the U.S., we don’t test for it. We play Russian roulette, basically,” he says.
Similarly, Topol is critical of the pharmaceutical industry for not pushing forward on screening to find target populations for best-selling biologic drugs such as Humira, Remicade, and Enbrel. “These drugs have a collective $30 billion in sales per year and maybe at best a 30% clinical response rate,” he says. “Why are we not going full-court press on finding out who responds and who doesn’t, so we don’t waste $18 billion to $20 billion a year?”
Doing so would likely require more specialized testing, such as immune repertoire sequencing and antibody sequencing, in addition to genome sequencing, “but there are not enough aggressive attempts to go after it,” he says.
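Topol’s dollar figures follow from simple back-of-envelope arithmetic. A minimal sketch, assuming waste scales linearly with the share of non-responders (the sales figure and response rate are the ones he cites; everything else is illustrative):

```python
# Back-of-envelope check of Topol's figures: annual spending on
# patients who do not respond to the anti-TNF biologics he names.
total_sales = 30e9     # combined yearly sales quoted for Humira, Remicade, Enbrel
response_rate = 0.30   # "maybe at best a 30% clinical response rate"

# Spending attributable to non-responders, assuming waste is
# proportional to the non-responding share of patients.
wasted = total_sales * (1 - response_rate)
print(f"${wasted / 1e9:.0f} billion a year spent on non-responders")
```

At a 30% response rate the implied waste is $21 billion a year, slightly above the $18 billion to $20 billion Topol cites; his range corresponds to assuming a response rate of roughly 33% to 40%.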
Several critics decry an innate intransigence in the health care industry that is working against personalized medicine. Munos at InnoThink points to huge savings that can be reaped by bringing clinical trials in line with modern digital technology. But doing so would likely meet with resistance among hospitals that stand to lose significant revenue if trials were crowdsourced.
“Let’s face it, many people in different organizations make a very comfortable living out of this dysfunctional state of affairs,” Munos says. Health care, he adds, will not easily be reformed, but it will be disrupted by technologies that create new access to data from patients. The system will be forced to change.
Pfizer’s Vincent notes that the traditional training of researchers will also be disrupted. “We’ve got a new breed of computational biologists that didn’t exist 15 years ago, and you need clinicians and clinical scientists who know how to talk to those people,” he says. “What is needed are people who have a multidisciplinary training, who can understand the clinical, medical, pathophysiological, and pharmacological bases of the disease and the patient—and turn them into something meaningful.”
Meaning, according to Poste at Arizona State, is defined by the patient, and the patient, in turn, is defined by data. “In the end,” Poste says, “everything we generate, irrespective of which technologies we use, should give rise to robust information that enables better clinical decisions and cost control.”