When voice-activated assistants first appeared on the market, they were perceived as a high-tech frill. Science fiction enthusiasts, reminded of the HAL 9000 computer in the movie 2001: A Space Odyssey, amused their friends by stumping Amazon’s Alexa with commands to open the pod bay doors. Meanwhile, Apple primed Siri, its iPhone voice assistant, with searing comebacks to the question: What are you wearing?
But pretty soon, consumers got serious about voice-activated devices, routinely asking the ever-ready, ever-listening vocal presence in their living space for a check on the weather or to cue up the latest from Taylor Swift.
Voice activation is now becoming an even more serious technology, completing its journey from science fiction to working scientific tool with the emergence of voice-activated laboratory assistants.
The trick will be upgrading the natural language processing (NLP) programs underpinning home voice systems to recognize chemical names, technical jargon, full catalogs of laboratory products, and a battery of standard chemical reactions. Beyond that, connections will have to be made to gadgets of far greater technical complexity than a coffee maker—mass spectrometry and chromatography instruments, for example.
Amazon, Google, and Apple, the NLP leaders, are not pursuing these upgrades. But laboratory informatics firms have started to build voice-activated assistants into their electronic laboratory notebooks (ELNs), laboratory information management systems (LIMSs), and lab instruments. And chemistry-primed versions of voice-assistant technology are already being deployed in laboratories by start-ups working in the field.
One of these start-ups, HelixAI, has been working for the past 2 years to develop an app that can set up Amazon’s Alexa as a lab voice assistant. The company was one of nine software development firms accepted last year into an Amazon accelerator program in which start-ups develop what Amazon calls skill sets—adjunct programming that adapts Alexa to specific work environments.
As its skill set, HelixAI has developed a syntactic template that it customizes for use in specific laboratories, according to CEO James Rhodes. “The template is going to have the phrases you would use to ask Alexa to do something,” Rhodes says. He explains that queries such as “What is the molecular weight of . . .” exist in the template with placeholders for the names of specific chemicals, reagents, or other terms and information common in a given lab.
The approach of providing a customized template rather than a comprehensive version of Alexa for, say, chemistry research not only tailors the Helix app for a particular lab but also keeps the voice technology from getting overwhelmed.
“Science has a lot of information, a lot of technical jargon, and a lot of vocabulary words,” says Rhodes, a software developer who is working on Helix with his wife, DeLacy Rhodes, a microbiologist and assistant professor at Berry College. “If you try to shove everything into one single Alexa skill, the voice interaction model and the reliability of Alexa understanding what is said degrade rapidly.”
Helix customizes the template using a software development kit supplied by Amazon. The application is “trained” via text entry, in a process similar to HTML programming, whereby metadata tags can be attached to each entry for guidance on pronunciation. “You never have to talk to train the system,” James Rhodes says.
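The template Rhodes describes maps loosely onto an Alexa skill interaction model: sample utterances carry a placeholder slot, and a custom slot type enumerates a given lab’s own vocabulary. The sketch below, in Python, is a hypothetical illustration of that structure; the intent name, invocation name, and chemical list are invented for the example, not drawn from HelixAI’s actual template.

```python
import json

# Hypothetical interaction model: one intent whose sample utterances
# contain a {Chemical} placeholder, plus a custom slot type listing
# the lab-specific terms that can fill it.
interaction_model = {
    "interactionModel": {
        "languageModel": {
            "invocationName": "my lab assistant",
            "intents": [
                {
                    "name": "MolecularWeightIntent",
                    "slots": [{"name": "Chemical", "type": "CHEMICAL_NAME"}],
                    "samples": [
                        "what is the molecular weight of {Chemical}",
                        "molecular weight of {Chemical}",
                    ],
                }
            ],
            "types": [
                {
                    "name": "CHEMICAL_NAME",
                    "values": [
                        # Synonyms attached to a slot value are one way
                        # text entry can steer recognition toward the
                        # names a lab actually speaks aloud.
                        {"name": {"value": "sodium chloride",
                                  "synonyms": ["NaCl"]}},
                        {"name": {"value": "EDTA"}},
                    ],
                }
            ],
        }
    }
}

# Serialized, this is the kind of JSON a developer would load
# into the skill-building tools.
model_json = json.dumps(interaction_model, indent=2)
```

Filling the placeholder with a different lab’s reagent list, rather than enumerating all of chemistry, is what keeps any one skill small enough for reliable recognition.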
Once the template is loaded, the user accesses it via an Amazon account. It can be used with any Alexa-accessible device, including the Amazon Echo Dot and Echo Show models, Amazon’s Kindle Fire, or a mobile device such as a cell phone.
One of Helix’s initial customers is New England Biolabs (NEB), a supplier of reagents for molecular biology that signed up Helix to develop a voice-activation system called myNEB, which it introduced to customers earlier this year.
NEB began working with Helix on its restriction enzymes, according to Penny Devoe, NEB’s associate director of portfolio management for DNA cloning. This meant entering not only the names of enzymes but also a cascade of associated properties, Devoe says: “What is the cut site, what is the temperature, what buffer does it work in? If I want to cut with these two enzymes, what buffer should I use? Can you stop the reaction by heating it up? All of these kinds of basic questions are things that we are focusing on with myNEB.”
The use case for voice activation? Fast, convenient access to any of this information, Devoe says. “This is just another way for our scientists to gather information—they can look it up in an old-fashioned catalog that’s been around since the 1970s, they can go to the computer and type in what they are looking for, they can call us at tech support and ask us the question, or they can just ask myNEB for the answer.”
LabTwin, a Berlin-based start-up with financial backing from BCG Digital Ventures and the lab supply firm Sartorius, began work on a lab voice assistant in 2017. It recently launched a mobile phone–based assistant that, in addition to fielding basic questions, can log notes and access data from laboratory software such as ELNs and LIMSs.
The product, added to a phone as an app, was developed using NLP technology from one of the main suppliers. “We are standing on the shoulders of a giant,” says Magdalena Paluch, cofounder and CEO of LabTwin, declining to disclose the company. But Paluch says product development has depended more on another established source of know-how—scientists in the lab.
LabTwin identified voice-activated documentation as a potential answer to an unmet need in the lab—an automated means of entering and accessing data in an otherwise fully automated work space. Annotation in laboratories is currently a hodgepodge because of scientists’ varying work routines and requirements, Paluch says.
In addition to being hobbled by gloves or unable to access lab notebooks or other informatics, scientists often monitor experiments that generate data that need to be documented, Paluch says. Notes are made on the fly, sometimes on sticky notes and napkins, she says.
“In the worst case, scientists will try to remember things,” Paluch says. “They told us it would be easier if they could just say it.”
Like Helix’s lab assistant, LabTwin’s basic product is a template or model that it tailors to specific users. “Every lab comes with its own ontology and unique workflows,” says Guru Singh, LabTwin’s head of growth. “We have to train the model.” This is done primarily via text programming, but voice input can be useful to accommodate accents and anomalous pronunciations.
LabTwin users who enter data by voice on mobile devices can later access the information via a web page so it can be melded with a central data repository. LabTwin has partnerships with lab informatics and instrumentation firms that allow for automatic data integration. And earlier this year, the company introduced a voice-command interface allowing access to data from informatics systems both inside and outside the lab. Managers at Helix and LabTwin say future iterations of their voice assistants may allow for user customization of basic templates.
Researchers in labs that hosted pilot tests of voice-activated assistants give the technology high marks for convenience and accuracy. They weren’t necessarily in the market for the voice systems when contacted by the suppliers, however.
Jason Furrer, an associate teaching professor of molecular microbiology and immunology at the University of Missouri School of Medicine, a longtime NEB customer, agreed to host a test of myNEB in the microbiology lab he manages. “We use their restriction enzymes, DNA and RNA purification kits, and specialized reagents,” he says. “When they approached me to beta test myNEB, I was happy to do so. It seemed like a lot of fun to try.”
It turned out to be a handy and reliable tool, Furrer says. With an Echo Dot or an Alexa app installed on a cell phone, researchers can access product manuals, procedures, and protocols by voice, getting an immediate response.
“The biggest advantage is that I don’t have to stop what I am doing and take off a pair of gloves,” Furrer says. “I can just ask if a reaction requires a specific reagent, and it will answer me. If I ask the temperature at which I can inactivate an enzyme, it will tell me that.” And it can turn music in the lab on and off via voice command, he says.
Furrer says the answer given by myNEB is almost always correct. And response accuracy improves with use, enabled by the machine-learning capabilities of Alexa’s NLP programming. There is, however, the occasional foul ball. “Once in a while you ask about something like the BamH I enzyme and you get reports and stats about the Great Bambino, Babe Ruth,” he says. “But if you say ‘myNEB’ before asking the question, then it knows what database to go into. We kind of learned that pretty quick.”
Ernesto Diaz-Flores, assistant adjunct professor of pediatrics at the University of California, San Francisco, School of Medicine, has since last October hosted a pilot installation of LabTwin in his lab, where research is focused on childhood leukemia. He came across the product when he was investigating ELNs, he says. A sales representative from Sartorius told him about LabTwin’s work on a voice-activated lab assistant, and it struck him as a possible alternative to a traditional ELN.
“Technology has advanced everything we do with experiments, but not the note taking,” Diaz-Flores says, describing the hectic environment in which annotation takes place. “When you are in the lab, you move from room to room because the technologies are in different rooms. All the information you carry is written on paper. Sometimes you make changes in your protocol, you make a note, and you lose it.”
LabTwin, he learned, offered a solution. “Having your notes activated without using your hands is a big plus,” he says. “Using your cell phone to record any notes and also take pictures and integrate it into your IT system . . . this was something new.”
Although LabTwin is not an ELN, Diaz-Flores says it serves the primary functions he was looking for in an ELN. “You can write reports with it, integrate images and voice notes,” he says. “It gives me the structure to make the report and even share it with lab members. Because of that I use it for my ELN.”
Several large lab supply companies and at least one chemical company are also floating new voice assistants.
Among the pilot testers in the field are approximately 100 customers of Evonik Industries’ paint and coating ingredients business, where a voice-activated assistant called Coatino has been in the works for 2 years. Evonik envisions the assistant as an aid outside the lab as well, with applications in marketing, purchasing, and manufacturing, says Oliver Kröhl, vice president of business development.
Evonik debuted Coatino as a work in progress at the European Coatings Show in Nuremberg, Germany, in March.
Thermo Fisher and Waters are among the lab management software and instrumentation companies that say they are exploring voice technology. LabVantage Solutions, a supplier of research software including LIMSs and ELNs, is further along and ready to talk about its voice activation product in development, for which the company is using NLP programming from Google.
Like Evonik, LabVantage has given customers a look at what it’s working on with voice-assistant technology. Senior R&D architect Gary Stimson, who initiated the Lottie (LabVantage Open Talk Interactive Experience) project and demonstrated it at the company’s North American user conference last fall, says positive customer feedback was tempered by cost considerations. But Robert Voelkner, a LabVantage marketing manager, says a number of customers have indicated they might be willing to pay for voice assistance, “which is of course music to my ears.”
Most voice assistants are sold on a tiered subscription model, with payment determined by the number of devices in laboratories and the level of interaction with lab informatics.
There are concerns beyond potential sticker shock. Stimson points out that cloud security is an open question, given that data translation using cloud-based NLP technology such as Google’s means that information is cycled out of the lab, into the cloud, and back. Voice technology vendors may pursue developing their own NLP programming to keep data within the user’s lab. James Rhodes at Helix notes that once a voice system is configured to turn instruments on and off—a feature not yet developed for the Helix lab assistant—the mere mention of an instrument could activate equipment. This might be addressed by requiring explicit verbal commands or codes for interaction with instrumentation.
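The explicit-command safeguard Rhodes suggests can be sketched as a simple arming gate: a casual mention of an instrument does nothing unless the user has first issued a deliberate confirmation keyword. Everything in this Python sketch is hypothetical, including the keyword and the instrument command; it illustrates the pattern, not any shipping feature of the Helix assistant.

```python
# Hypothetical arming keyword; a real system might use a spoken code.
ARM_KEYWORD = "confirm"

def handle_utterance(text, armed=False):
    """Return (action, armed) for a recognized utterance.

    An instrument command executes only if the system was armed
    by a prior, explicit confirmation keyword.
    """
    words = text.lower().split()
    if ARM_KEYWORD in words:
        return (None, True)                    # arm, but take no action yet
    if "start" in words and "spectrometer" in words:
        if armed:
            return ("start_spectrometer", False)  # act, then disarm
        return (None, False)                   # mere mention: ignored
    return (None, armed)

# A passing mention of the instrument does nothing...
action, armed = handle_utterance("the spectrometer won't start")
# ...until the user explicitly arms the command first.
_, armed = handle_utterance("confirm")
action, _ = handle_utterance("start the spectrometer", armed=armed)
```

Disarming immediately after each action keeps a single confirmation from leaving the equipment open to later accidental triggers.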
And there is always a level of user resistance to new technology in the laboratory. But Diaz-Flores at UCSF says the pushback by his staff on voice assistance was minimal—some researchers equated it with ELN technology, which tends to introduce changes in how work is done. Diaz-Flores assured his team that a voice system is simply a voice system—a gizmo that allows them to speak rather than take notes or to ask for information rather than leave the bench to look it up.
And it is a familiar gizmo, given the ubiquitous deployment of voice systems in everyday life. So why not at work? “For me,” Diaz-Flores says, “it is just becoming a very versatile tool for organizing my workflow.”