
Microscopy

Can researchers use AI to take microscopy to the next level?

From virtual staining to lensless microscopes, researchers are capitalizing on artificial intelligence to rethink microscopy

by Fionna Samuels
October 24, 2024 | A version of this story appeared in Volume 102, Issue 34

 

Two biopsy images. The tissue is stained pink, and small purple flecks are visible within both images.
Credit: Ozcan Lab/UCLA
When Aydogan Ozcan asked pathologists to compare the same tissue slice stained virtually (left) or chemically (right) with Congo red, the doctors deemed both appropriate for making a diagnosis.

For centuries, scientists have used microscopes to magnify and peer into a world invisible to the naked eye. The earliest instruments were simple lens-filled tubes, the best of which revealed the existence of single-celled organisms. Over the years, new technologies—from incorporating pinholes into light paths to gene editing samples for fluorescence expression—have allowed researchers to push the bounds of light microscopy. Each development has brought about new discoveries across multiple fields of research. Now, as computers become more powerful, artificial intelligence may be the next technological advance to come to microscopy.

Artificial intelligence, or AI, is an umbrella term that generally refers to a machine’s ability to imitate or surpass human capabilities for completing certain analytical tasks. In practice, AI today is often implemented with machine learning programs that use artificial neural networks to mimic how a brain functions. This allows the algorithm to learn patterns within data rather than being explicitly programmed to recognize such patterns.

The network in a traditional machine learning program is fairly simple and contains only one or two computational layers. But by incorporating more computational layers, often hundreds or thousands, scientists create deep learning programs, a more advanced type of machine learning that can self-correct an output as the output is computed. Ultimately, this enables the program to better process new data based on learned patterns.
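To make the distinction concrete, here is a minimal sketch, written in PyTorch with arbitrary layer sizes, of a "shallow" network with a single hidden layer next to a "deep" one that simply stacks many more layers. None of the numbers correspond to any particular microscopy model.

```python
# Illustrative sketch only: a "shallow" network with one hidden layer versus
# a "deep" one that stacks many layers. Sizes and depth are placeholders.
import torch.nn as nn

shallow_net = nn.Sequential(
    nn.Linear(256, 64),   # a single computational layer
    nn.ReLU(),
    nn.Linear(64, 10),
)

def make_deep_net(n_layers: int = 100, width: int = 64) -> nn.Sequential:
    """Stack many hidden layers to form a deep network."""
    layers = [nn.Linear(256, width), nn.ReLU()]
    for _ in range(n_layers):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 10))
    return nn.Sequential(*layers)

deep_net = make_deep_net()
```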

With two of the three 2024 science Nobel Prizes honoring scientists working with AI—the physics prize for those who laid the foundations that today’s AI algorithms rely on, and the chemistry prize for those applying the technology—it’s not surprising that AI has also reached microscopy labs. From virtually staining micrographs to building smaller microscopes, scientists are using AI to revolutionize the field. Organizations and individuals alike are working to bring these advances to scientists across the globe.

Replacing chemicals with AI

Chemical staining has long played a central role in microscopy. In research laboratories, molecular stains help scientists identify organelles, differentiate cell types, and map cellular dynamics. In clinical settings, pathologists rely on stained tissues to make life-changing diagnoses for patients. In Aydogan Ozcan’s laboratory at the University of California, Los Angeles, researchers are taking a different approach. His team is working to replace the chemical specificity of molecular stains with the pattern recognition of AI. His ultimate goal: replace chemical staining with virtual staining, sending the former into obsolescence.

Virtual staining is aptly named; none of the color that ends up in the micrograph exists in the physical world. Instead, a researcher designs and trains an AI to identify relevant cellular structures in a micrograph, digitally coloring them to match a specific chemical stain. This means that a scientist can capture label-free images to digitally stain later.

Using label-free images allows researchers to capitalize on the benefits of label-free microscopy methods. These are often gentler on cells and tissues, says computer engineer Shalin Mehta at the Chan Zuckerberg Biohub San Francisco, because “you are not depositing a lot of energy in the cell, so the cells are healthy for a long time.”

There is no single way for a researcher to capture label-free images for virtual staining. If Ozcan could choose only one to use in his lab, it would be autofluorescence, a technique that captures the light that cellular components emit naturally rather than relying on fluorescent stains to generate light. Across the state, Mehta uses a quantitative label-free microscopy approach that relies on a computational technique to transform images of nearly clear cells into high-contrast micrographs based on the refractive index of various cellular structures. Regardless of their microscopy approach, both scientists have designed and trained machine learning algorithms to successfully color their micrographs.

I personally have an aversion to using human time to draw pixels. I just don’t think it’s the right use of human intellect.
Shalin Mehta, CZ Biohub San Francisco

Virtual staining for pathology

Although virtual staining is widely applicable across fields that rely on microscopy analysis, Ozcan is primarily interested in using the technique in medicine. So he is developing AI software that he thinks may transform histology, a key part of medical diagnosis that requires a tissue sample, usually taken by biopsy.

A dark, hard-to-interpret image of cells sits above four pink and purple images that look similar but feature slightly different shades.
Credit: Ozcan Lab/UCLA
Multiple virtual stains, each capturing important information that a pathologist could use to make a diagnosis, can be applied to the same slice of kidney tissue. Each equivalent chemical stain would require its own tissue section.

Anyone who has needed a biopsy is likely familiar with the traditional clinical workflow: a doctor removes a small section of questionable tissue and sends it off to a laboratory. There, a technician slices the sample, stains the slices, and snaps micrographs. Those images—or sometimes the entire tissue slides—are sent to a pathologist to analyze and issue a diagnosis.

In best-case scenarios, it can take hours for a technician in a histology lab to stain a sample. If the technicians are overburdened or the sample is sent while the lab is closed, it can take days for a pathologist to receive something to analyze. An aging reagent or lab mistake can make the image or slide unusable, requiring a technician to stain a new slice, further delaying a diagnosis.

Many disease biomarkers require chemically specific stains, and those stains can interact with one another, making multiplex imaging—using one slice to diagnose multiple conditions—difficult or impossible.

“It would be nice if we had all the tissue we needed, so we could cut as many sections as we want, and do all the types of stains we want. But it’s a limited resource,” says Liron Pantanowitz, chair of the department of pathology at the University of Pittsburgh. Virtual staining could help conserve tissue, he says.

“All kinds of problems are eliminated with virtual staining,” Ozcan says. Instead of taking the time to stain a slice of a tissue sample, a technician can immediately capture a micrograph, process it with the AI algorithm, and send it to a pathologist. Based on its training data, the program will digitally color the cellular components in that image as if they were stained with a chemically specific dye. This can save time, Ozcan says, and readily provide multiplex imaging.

Over the past year, Ozcan, an electrical and computer engineer, has coauthored seven journal articles demonstrating the power of virtual staining in histology. In his most recent paper, Ozcan and colleagues describe a deep learning program capable of virtually staining micrographs with Congo red, the gold standard dye for revealing deposits of misfolded proteins in tissue known as amyloids.

The scientists worked with medical personnel in histology labs to train the algorithm, feeding it before-and-after shots of biopsy specimens chemically stained with Congo red. After the AI learned from 386 pairs of images, the team tasked it with staining a test image. It worked. In a blind test, when a pathologist was presented with the virtually stained image and its chemically stained counterpart, the doctor concluded that either one could be used for diagnosis (Nat. Commun. 2024, DOI: 10.1038/s41467-024-52263-z).
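In broad strokes, training on such paired images can look like the sketch below, written in PyTorch. The data loader, model, and simple pixel-wise loss are placeholders for illustration; the published program differs in its architecture and training details.

```python
# Minimal sketch of supervised training on paired images: a network learns to
# map a label-free micrograph to its chemically stained counterpart.
# The dataset, model, and loss here are simplified stand-ins.
import torch
from torch import nn
from torch.utils.data import DataLoader

def train_virtual_stainer(model: nn.Module, paired_data: DataLoader, epochs: int = 10):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    pixel_loss = nn.L1Loss()  # penalize per-pixel differences from the chemical stain
    for _ in range(epochs):
        for label_free, chem_stained in paired_data:  # before-and-after image pairs
            predicted_stain = model(label_free)
            loss = pixel_loss(predicted_stain, chem_stained)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```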

We need high ethical standards and regulations to make sure that we do this properly.
Liron Pantanowitz, University of Pittsburgh

Despite their promise, virtual staining programs run the risk of hallucinating—generating misleading images of patient samples. This might happen in one of two ways, Ozcan says. The program might color a micrograph in a completely nonsensical way. Or it might generate a perfect-looking, technically accurate image of a nonexistent sample.

Ozcan isn’t worried about the first type of hallucination: “One thing that is very important and distinct for virtual staining compared to natural language models,” he says, “is that it does not replace the pathologist. It’s not giving you a diagnosis.” Pathologists are experts at identifying poorly stained micrographs made by humans and will easily identify—and toss out—a nonsensically stained micrograph generated by AI.

“The second form of hallucination is more dangerous,” Ozcan says. “That is the one that a gatekeeper pathologist cannot identify,” because it’s a perfect imitation of a real patient specimen. He proposes adding another AI tool to score the likelihood that a virtually stained micrograph is a hallucination, thus giving a pathologist confidence that the image is not fake.
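One simple proxy for such a confidence score, offered here only as an illustration and not as Ozcan's proposed tool, is to run several independently trained virtual staining models on the same label-free image and measure where their outputs disagree.

```python
# Illustration only: score potential hallucinations by ensemble disagreement.
# High pixel-wise spread across independently trained models suggests the
# virtually stained output is less trustworthy.
import torch

def disagreement_score(models, label_free_image: torch.Tensor) -> float:
    with torch.no_grad():
        stains = torch.stack([m(label_free_image) for m in models])
    per_pixel_std = stains.std(dim=0)   # spread across the ensemble
    return per_pixel_std.mean().item()  # higher = less consistent output
```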

Pittsburgh pathologist Pantanowitz is unaffiliated with Ozcan’s work, but he’s enthusiastic about the role of virtual staining and AI more broadly in future pathology labs. It will take time and approval from the US Food and Drug Administration to bring the new technologies to clinical settings. “We need high ethical standards and regulations to make sure that we do this properly,” Pantanowitz says. Additional research-based evidence of the reasonable uses of virtual staining will help set guidelines for its use, he says. More practically, there will need to be a way to reimburse early adopters at medical practices.

“I bet you that if, all of a sudden, there was a billing code for virtual staining, we’d say, ‘Okay, fine, I’m going to try to take this on,’ ” Pantanowitz says.

Virtual staining for biology

While Ozcan is working to bring virtual staining into histology labs, San Francisco–based computer engineer Mehta is building computational microscopy platforms and AI algorithms for biology labs investigating cellular dynamics. “Our focus is on revealing new biology and revealing dynamics of the system,” he says. Virtual staining illuminates cellular interactions “in a way that has not been possible before.”

Credit: Ivan Ivanov, Eduardo Hirata-Miyasaki, Talon Chandler, Shalin Mehta/CZ Biohub San Francisco
As Mantis simultaneously collects individual channels of information (left), the images can be overlaid (right), revealing more interactions between cellular components.

To this end, Mehta, his team, and their collaborators from the University of California, San Francisco, developed the Mantis microscope (PNAS Nexus 2024, DOI: 10.1093/pnasnexus/pgae323). It was given the name because “like a mantis shrimp has high-dimensional vision, so does our microscope,” Mehta says. “It’s a system that allows you to do fast label-free imaging and fast fluorescence imaging.”

Mantis uses the changing phase and polarization of light passing through a sample to generate label-free images. In Mehta’s lab, samples comprise cells, tissues, and sometimes entire live organisms. To the human eye, he says, many cells appear clear because humans can’t detect phase or polarization changes, instead seeing only hard-to-interpret shadows.

Two side-by-side images of cells. The left has a gray background and is splotched with green and pink highlights. The right has the same gray background but has blue blobs inside the gray cells and orange outlines.
Credit: Eduardo Hirata-Miyasaki, Ziwen Liu, Shalin Mehta/CZ Biohub San Francisco
Incorporating virtual staining into the Mantis microscope workflow allows researchers to chemically stain cell membranes (left, pink) and mitochondria (left, green) and then add in virtual stains where the chemical stain fails to adhere to the cell membrane (white dashed box, orange) or supply more information about nuclei (right, cyan).

By shining polarized light onto a sample and snapping multiple 3D images of the sample as the plane of polarization rotates, Mantis captures how those shadows change within cells. The snapshots are then fed into a physics-based computer algorithm that measures the minute phase and polarization changes, combines the images, and transforms them into a high-contrast micrograph. Coupled with simultaneously captured fluorescence images, the final images reveal physical and molecular characteristics of a sample.
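The actual Mantis reconstruction is more involved, but a toy example conveys the principle: with a rotating linear analyzer, the intensity recorded at each pixel traces a sinusoid in the analyzer angle, and fitting that sinusoid recovers a polarization orientation map and a contrast map. The function below is a generic textbook calculation, not the Biohub's code.

```python
# Toy illustration of the general principle (not the Mantis pipeline):
# with a rotating linear analyzer, I(theta) = 0.5*(S0 + S1*cos(2*theta) + S2*sin(2*theta)).
# A per-pixel least-squares fit recovers the Stokes components S0, S1, S2.
import numpy as np

def fit_stokes(images: np.ndarray, angles_rad: np.ndarray):
    """images: stack of shape (n_angles, H, W); angles_rad: analyzer angles."""
    design = 0.5 * np.stack(
        [np.ones_like(angles_rad), np.cos(2 * angles_rad), np.sin(2 * angles_rad)],
        axis=1,
    )                                             # (n_angles, 3)
    pixels = images.reshape(len(angles_rad), -1)  # (n_angles, H*W)
    stokes, *_ = np.linalg.lstsq(design, pixels, rcond=None)
    s0, s1, s2 = stokes.reshape(3, *images.shape[1:])
    orientation = 0.5 * np.arctan2(s2, s1)                  # polarization orientation map
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)    # degree of linear polarization
    return s0, orientation, dolp
```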

The team designed Mantis to capture as much information about a sample as possible, Mehta says. In addition to a channel collecting phase and polarization information, the microscope also has two different channels to simultaneously capture two sets of fluorescence data. “The bottleneck in using methods like fluorescence has been that, typically, you cannot encode more than three or four channels,” Mehta says.

Virtual staining can open the floodgates for adding more information to an image; AI can identify several cellular components and add them to a micrograph already showing those physically labeled with a fluorescent stain. The program can also fill in gaps where fluorescent stains do not effectively adhere to their targets. In practice, this means that Mehta can chemically stain mitochondria and lysosomes within cells and then virtually stain their cell membranes and nuclei, effectively doubling the amount of information within an image.

It’s tricky to create a deep learning algorithm capable of interpreting micrographs of cells with differing shapes and at various magnifications because the application of an AI is generally limited by its training data. But Mehta and his team want to make their platform widely usable. “Our goal is to map dynamic cell systems across many scales,” he says, “that is our biological North Star.”

Mehta and his team have taken a few approaches to training their AI to make it more robust (bioRxiv 2024, DOI: 10.1101/2024.05.31.596901). For one, he says they “create training tasks that are typically harder than what is the end goal, and that enforces that the models learn more semantic representations of the data.” So instead of training the program to recognize only one information channel—perhaps cell nuclei—they ask it to recognize many—nuclei, membranes, and more.
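A minimal illustration of that idea, with made-up channel names and a toy convolutional model, is to give the network several output channels to predict at once rather than a single one.

```python
# Illustration only: a toy fully convolutional model that predicts several
# structures at once from one label-free input channel. Channel names and
# layer sizes are placeholders.
import torch
from torch import nn

TARGET_CHANNELS = ["nuclei", "membranes", "mitochondria"]  # harder, multi-task target

model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, len(TARGET_CHANNELS), 1),  # one predicted image per structure
)

label_free = torch.randn(1, 1, 256, 256)  # batch of one phase image
predicted = model(label_free)             # shape (1, 3, 256, 256)
```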


Another way the researchers make the program more robust is by augmenting their training data with physics-based simulations of real data. The choice of microscope, objective lens, and excitation wavelength will all change the appearance of a micrograph, Mehta says. It’s not feasible to collect enough training data for an AI to recognize the same information across all those variables.

“Instead of collecting the experimental data, we simulate the process of data generation from the data we already have,” he says. “That allows us to train the models that generalize across instruments.” The augmented data provide a large pretraining library for the AI to learn from, but it is still vital that researchers fine-tune the program with their own relevant data.
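As a rough sketch of that idea (the blur and noise models here are simplifications, not the team's simulator), one can take existing micrographs, blur them with point spread functions of varying width to stand in for different objectives and wavelengths, and add photon noise.

```python
# Hedged sketch of physics-based augmentation: simulate how other instruments
# might have recorded an existing micrograph. The Gaussian PSF and Poisson
# noise model are simplified stand-ins for a real image-formation simulator.
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_other_instrument(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    psf_sigma = rng.uniform(0.5, 3.0)        # blur width stands in for optics choice
    photons_per_unit = rng.uniform(50, 500)  # stands in for exposure / brightness
    blurred = gaussian_filter(image, sigma=psf_sigma)
    noisy = rng.poisson(np.clip(blurred, 0, None) * photons_per_unit) / photons_per_unit
    return noisy.astype(np.float32)
```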

Ultimately, the power of virtual staining comes down to saving a researcher time. Using AI, a scientist won’t need to spend time manually identifying different cellular components in a label-free image. “I personally have an aversion to using human time to draw pixels,” Mehta says. “I just don’t think it’s the right use of human intellect.”

Machine learning for lensless imaging

Four images. One, labeled Input, sits above the other three and is a blur of orange and green. The three images beneath are labeled Physics-based, Blended, and Machine-learning-based. Each shows a butterfly at varying levels of blurriness and pixelation.
Credit: Kristina Monakhova and Laura Waller/UC Berkeley
By including physics algorithms in the architecture of their machine learning program (Blended), Laura Waller's team is able to de-blur images captured with a lensless camera (Input) more effectively than by using pure physics algorithms (Physics-based) or pure machine learning programs (Machine-learning-based).

Virtual staining is not the only target use of AI in microscopists’ sights. In Laura Waller’s computational imaging lab at the University of California, Berkeley, she and her students are developing machine learning programs to build lens-free cameras and microscopes.

“The only job of the lens is to bend light rays in a particular way to form the image,” explains Waller. Because the way light interacts with an object can be mathematically predicted with physics-based computer algorithms, Waller says one could wonder, “Can we bend the light rays computationally to reconstruct the image?”

If so, scientists could eliminate the need for bulky objective lenses and design smaller microscopes. Without lenses, images will appear as meaningless blurs, and AI can’t simply transform any fuzzy image into a crisp one. To digitally reconstruct such a photo, researchers must have a complete understanding of how the light has been scattered from a surface. To facilitate this, Waller and her team characterize and design surfaces that scatter light onto a sensor in a known way.

“You get a picture that looks like garbage,” Waller says, but because the team knows how light should scatter from the diffuser’s surface, they can use a model to work backward and reconstruct what the true image should look like. “Think of it like your optics encode the information,” she adds, “and then your algorithm decodes it.”
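A classical, purely physics-based decoder gives a sense of what working backward means. If the lensless measurement is approximately the scene convolved with the diffuser's calibrated point spread function, a Wiener filter can invert that blur in the Fourier domain. This sketch is a generic textbook method, not the Berkeley group's pipeline.

```python
# Generic physics-only decoder (illustration, not the Berkeley pipeline):
# Wiener deconvolution using the diffuser's calibrated point spread function.
import numpy as np

def wiener_decode(measurement: np.ndarray, psf: np.ndarray, noise_level: float = 1e-2):
    PSF = np.fft.fft2(np.fft.ifftshift(psf), s=measurement.shape)
    MEAS = np.fft.fft2(measurement)
    wiener = np.conj(PSF) / (np.abs(PSF) ** 2 + noise_level)  # regularized inverse filter
    return np.real(np.fft.ifft2(wiener * MEAS))
```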

The team tested computer algorithms built purely on physics and others built purely on machine learning to de-blur images. With pure physics, the model was slow and produced “weird artifacts” in the final image, Waller says. The team’s pure machine learning model was fast, but it also produced artifacts.

But when the team folded physical principles into the architecture of the AI algorithm, the resulting program was computationally quick and produced higher-quality images than the other approaches (Opt. Express 2019, DOI: 10.1364/OE.27.028075). “It was actually really beautiful, because the answer was that working together is better,” Waller says.
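The following sketch captures the spirit of folding physics into the network, though it is not the published architecture: each stage takes a gradient-style step that enforces consistency with the known point-spread-function forward model, then passes the estimate through a small learned denoiser. Unrolling a few such stages yields a trainable reconstructor.

```python
# Sketch of a "physics inside the network" reconstructor (illustration only):
# alternate a data-consistency step using the known PSF with a small learned
# denoiser, and unroll a fixed number of stages.
import torch
from torch import nn

class PhysicsBlendedNet(nn.Module):
    def __init__(self, psf: torch.Tensor, n_stages: int = 5):
        super().__init__()
        self.otf = torch.fft.fft2(psf)  # known forward model: convolution with the PSF
        self.step_sizes = nn.Parameter(torch.full((n_stages,), 0.1))
        self.denoisers = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
            for _ in range(n_stages)
        ])

    def forward_model(self, x):   # blur the estimate with the known PSF
        return torch.real(torch.fft.ifft2(torch.fft.fft2(x) * self.otf))

    def adjoint_model(self, x):   # adjoint of the blur (correlation with the PSF)
        return torch.real(torch.fft.ifft2(torch.fft.fft2(x) * torch.conj(self.otf)))

    def forward(self, measurement):
        x = torch.zeros_like(measurement)
        for step, denoiser in zip(self.step_sizes, self.denoisers):
            residual = self.forward_model(x) - measurement  # physics: data mismatch
            x = x - step * self.adjoint_model(residual)     # gradient-style update
            x = denoiser(x.unsqueeze(1)).squeeze(1)         # learned denoising step
        return x
```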

Waller reasoned that if machine learning programs could help decode the information in blurry images, they could probably also help encode that information. So she and her team built a computer model to design a more effective diffuser surface. Although they’re still in the process of parameterizing it—“it can’t have arbitrary design capabilities,” Waller points out, because the surfaces need to be fabricated in the real world—the AI worked. The surface it designed takes qualities from both refractive and diffractive optics, each of which bends light in slightly different ways. Combining the new AI-designed surface with the de-blurring AI created crisper images.

Although the surface’s blended approach to scattering light makes it a challenge to fabricate, the improved image quality makes it worthwhile, Waller says. “Now we’re using it for real applications in microscopy,” she adds.

Bridging disciplines

Even as AI becomes ubiquitous in daily life, there is still a wide gap between the promise of the technology and its integration into the average microscopy lab. The team behind the European project AI4Life hopes to bridge that divide by connecting life scientists and computer scientists. The former group provides complex real-world microscopy data, while the latter can build bespoke AI programs to help analyze that data, says Beatriz Serrano-Solano, an outreach coordinator at AI4Life.

Scientists can engage with the program in a few ways, Serrano-Solano says. A microscopist can peruse the BioImage Model Zoo—an open access repository of pretrained AI models—for a program that might fit their needs. If none is quite right, the scientist can submit their problem during one of AI4Life’s Open Calls, where “we offer support for deep learning analysis for scientists,” Serrano-Solano says. And for computer scientists, Serrano-Solano says the project organizes data science competitions called Challenges, where researchers “compete to get the best solution or to get the best performance method.”

The organization isn’t collaborating just with individuals; it also has a working relationship with microscope manufacturer Leica Microsystems. “We have this collaboration with them in which they can retrieve the Zoo models,” Serrano-Solano says. These are then integrated into Leica’s Aivia software, a deep-learning image-analysis tool the company is developing.

Leica is not the only microscope company developing AI-based tools. Evident—previously known as Olympus Scientific Solutions—and Zeiss are also developing machine learning algorithms to help microscopists analyze images.

From their research labs, Ozcan and Mehta are taking different approaches to sharing their AI programs more broadly. Ozcan cofounded Pictor Labs in 2019. The UCLA spin-off company is developing virtual staining programs for histology and recently raised $30 million in series B funding.

Rather than commercializing their software, Mehta and his team are committed to publishing all their computational and deep learning methods in open access repositories. “Open sharing and open exchange are the strength of the community,” he says. By embracing those principles, Mehta hopes their AI algorithms will be widely adopted.
