Quantum-enabled proteins open a new frontier in biotechnology

A research team led by the University of Oxford’s Department of Engineering Science has shown it is possible to engineer a quantum mechanical process inside proteins, opening the door to a new class of quantum-enabled biological technologies.

The study, published in Nature, reports the creation of a new class of biomolecules, magneto-sensitive fluorescent proteins (or MFPs), that can interact with magnetic fields and radio waves. This behavior is enabled by quantum mechanical interactions within the protein, which occur when it is exposed to light of an appropriate wavelength.

While quantum effects have previously been shown to be central to some biological processes (such as navigation in birds), this is the first time they have been engineered to create a new family of practical technologies. This marks a shift from observing quantum effects in nature to deliberately designing them for real-world use.

Potential applications in biomedicine

The researchers are already exploring applications of these technologies in biomedicine. As part of the study, the team created a prototype imaging instrument that can locate the engineered proteins using a mechanism similar to that of Magnetic Resonance Imaging (MRI), which is widely used in hospitals.

However, unlike MRI, it would be able to track specific molecules or gene expression within a living organism. Such measurements are central to medical challenges including targeted drug delivery and monitoring genetic changes inside tumors.

How the engineered proteins were created

To generate the engineered proteins, the research team used a bioengineering technique known as directed evolution. In this method, random mutations are introduced into the DNA sequence encoding the protein, creating thousands of variants with altered properties. High-performing variants are selected from this collection, and the process is repeated. After many consecutive rounds of directed evolution, the selected proteins had dramatically improved sensitivity to magnetic fields.
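
The logic of this mutate-screen-select loop is simple enough to sketch in code. Below is a minimal, illustrative Python sketch of the cycle described above; the sequence length, library size, and fitness function are hypothetical stand-ins for the team's experimental screen for magnetic-field sensitivity.

```python
import random

ALPHABET = "ACGT"

def mutate(seq: str, rate: float = 0.01) -> str:
    """Introduce random point mutations into a DNA sequence."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else base
        for base in seq
    )

def directed_evolution(parent: str, fitness, rounds: int = 10,
                       library_size: int = 1000, keep: int = 20) -> str:
    """Repeated mutate-screen-select cycles, as described above.

    `fitness` stands in for the experimental screen (here, any function
    scoring a variant; in the study, magnetic-field sensitivity)."""
    pool = [parent]
    for _ in range(rounds):
        # Diversify: build a library of mutant variants from the survivors.
        library = [mutate(random.choice(pool)) for _ in range(library_size)]
        # Screen and select: keep only the highest-performing variants.
        pool = sorted(library, key=fitness, reverse=True)[:keep]
    return pool[0]

# Toy screen: score similarity to an arbitrary "ideal" sequence.
target = "".join(random.choice(ALPHABET) for _ in range(60))
score = lambda s: sum(a == b for a, b in zip(s, target))
best = directed_evolution("A" * 60, score)
print(best, score(best))
```

In the laboratory, the screening step is the expensive part: each round requires physically testing thousands of variants.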

Interdisciplinary collaboration and future directions

Achieving the breakthrough required an ambitious interdisciplinary approach, linking expertise in Engineering Biology, Quantum Technologies, and Artificial Intelligence, three innovation areas recently highlighted as central to the UK's Industrial Strategy. This study is thought to be the first in which their intersection has been exploited to create a new technology.

Gabriel Abrahams, first author of the paper and Ph.D. student in the Department of Engineering Science, described the work as “a hugely exciting discovery. What blows me away is the power of evolution: we don’t yet know how to design a really good biological quantum sensor from scratch, but by carefully steering the evolutionary process in bacteria, Nature found a way for us.”

The senior author of the study, Associate Professor Harrison Steel of the Department of Engineering Science, said, “Our study highlights how difficult it is to predict the winding road from fundamental science to technological breakthrough. For example, our understanding of the quantum processes happening inside MFPs was only unlocked thanks to experts who have spent decades studying how birds navigate using Earth’s magnetic field. Meanwhile, the proteins that provided the starting point for engineering MFPs originated in the common oat.”

Professor Steel continued, “We are immensely grateful for the support of our work, which has been instrumental in enabling our interdisciplinary vision to carry out bioengineering alongside robotics, control algorithms, and AI, all in one lab.”

Following the success of this project, the team is now accelerating work to realize the many applications of their discovery, and to further our understanding of quantum effects in nature as part of a major recent BBSRC project led by Oxford’s Department of Chemistry.

The team was led by Oxford’s Department of Engineering Science, with collaborators in the Department of Chemistry, and international partners from Aarhus University, the Royal Melbourne Institute of Technology, Sungkyunkwan University, and the US-based company Calico Life Sciences LLC.

New AI tool removes bottleneck in animal movement analysis

Researchers from the University of St Andrews have developed an AI tool that reads animal movement from video and turns it into clear, human-readable descriptions, making behavioral analysis faster, cheaper, and scalable across species.

The PoseR plugin has been developed to remove a major bottleneck in neuroscience, psychology, and biology, enabling larger, faster, and more reproducible studies. The work is published in Open Biology.

Animal behavior is a window into how the brain works, or fails to work in the case of disease. Traditionally, scientists have spent hours manually scoring videos to analyze behavior, often taking weeks or months per project. Speeding this up can therefore have a big impact.

How the AI tool works

This new breakthrough makes behavior analysis much faster and more consistent, helping researchers study animal actions and brain function more efficiently.

The AI tool uses a method called “Graph Neural Networks.” These networks can be fed data structured as graphs, which correspond to the varying body shapes of animals. The tool lets the user determine what the subjects in a recording are doing and assigns categories of behavior, as sketched below.
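
To illustrate the idea (this is a generic sketch, not PoseR's actual implementation), a graph neural network treats each tracked body point as a node and each skeletal connection as an edge, then repeatedly mixes information across neighboring joints before classifying the pose. A minimal NumPy version with hypothetical node features and untrained weights:

```python
import numpy as np

# A pose "skeleton": nodes are tracked body points, edges connect joints.
# Here, a toy 5-point animal (head, body, tail, left fin, right fin).
edges = [(0, 1), (1, 2), (1, 3), (1, 4)]
n_nodes = 5

# Adjacency with self-loops, row-normalized (the propagation rule of a
# basic graph convolutional layer).
A = np.eye(n_nodes)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A = A / A.sum(axis=1, keepdims=True)

def gcn_layer(X, W):
    """One message-passing step: average neighbor features, then project."""
    return np.maximum(A @ X @ W, 0.0)  # ReLU

rng = np.random.default_rng(0)
X = rng.normal(size=(n_nodes, 2))      # (x, y) coordinates per node
W1 = rng.normal(size=(2, 16))          # weights are learned in practice
W2 = rng.normal(size=(16, 3))          # 3 hypothetical behavior classes

H = gcn_layer(X, W1)
logits = (A @ H @ W2).mean(axis=0)     # pool over nodes -> one prediction
probs = np.exp(logits) / np.exp(logits).sum()
print("behavior class probabilities:", probs)
```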

The tool was developed by a team of researchers led by Dr. Maarten Zwart from St Andrews School of Psychology and Neuroscience, whose research looks at how the brain produces behavior. They are particularly interested in how movements are produced, and using this tool will speed up their analyses.

He said, “Our team aims to find out how the brain and spinal cord interact to produce all the different movements that we and other animals make. This tool, which started as a COVID lockdown project, is an exciting new development driven by Dr. Pierce Mullen, along with Dr. Holly Armstrong, our research technician, and two amazing undergraduate researchers, Beatrice Bowlby and Angus Gray.”

By removing a major bottleneck in neuroscience, psychology, and biology, it is hoped the tool will enable larger, faster, and more reproducible studies, as well as rapid screening of animal models for disease, eventually making it easier to make fundamental discoveries and find cures for neurological diseases.

Tracer reveals how environmental DNA moves through lakes and rivers

Forensics experts gather DNA to understand who was present at a crime scene. But what if the crime occurred in the middle of a lake, where DNA could be carried far and wide by wind and waves? That’s the challenge faced by aquatic ecologists who study environmental DNA (eDNA) to monitor endangered animals, track invasive species, or monitor fish populations.

A team of ecologists and engineers from Cornell and the University of Granada has made a breakthrough in understanding eDNA movement in water. Researchers developed synthetic DNA that mimics the behavior of eDNA, released some of it in Cayuga Lake near Cornell’s Ithaca campus, traced its movement for 33 hours, then incorporated their findings into a new model that can predict where a sampled particle of eDNA likely originated in a water body.
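
As a rough illustration of what such a source-prediction model does (the study's model is more sophisticated, and every parameter value below is a placeholder), one can score how likely a detected eDNA particle is to have been released at a given distance upstream using a one-dimensional advection-diffusion solution with first-order decay:

```python
import numpy as np

def edna_source_likelihood(distance_upstream, t_hours, u=0.05, D=1.0, k=0.02):
    """Relative likelihood that eDNA detected now was released a given
    distance upstream, t hours ago.

    1D advection-diffusion with first-order decay (illustrative values):
      u: mean current speed (m/s), D: mixing coefficient (m^2/s),
      k: eDNA decay rate (1/h).
    """
    t = t_hours * 3600.0                      # seconds
    spread = np.sqrt(4.0 * np.pi * D * t)     # Gaussian plume width
    transport = np.exp(-(distance_upstream - u * t) ** 2 / (4.0 * D * t))
    decay = np.exp(-k * t_hours)              # DNA degrades over time
    return transport * decay / spread

for d in np.linspace(0, 10_000, 5):           # meters upstream of the sample
    print(f"{d:7.0f} m: {edna_source_likelihood(d, t_hours=24):.3e}")
```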

The research was published in Environmental Science & Technology.

“Over the past 15 years, advances in molecular methods have expanded eDNA from single-species detection to community-wide biodiversity monitoring, often making it faster, cheaper and more sensitive than traditional survey methods,” said Jose Andrés, paper co-author and senior research associate in the College of Agriculture and Life Sciences (CALS).

“One of the key challenges, especially in large freshwater and marine environments where eDNA may be mixed quite deeply into the water column and currents may be strong, is knowing when detected DNA was released by the source organism, and how far away the organism was.”

Development and application of synthetic DNA tracer

The synthetic DNA tracer was created by Zeyu Li, paper first author and a doctoral student in the lab of Dan Luo, another co-author and professor of biological and environmental engineering. The tracer was made up of short, unique DNA sequences encapsulated in a safe, biodegradable polymer frequently used in the food and pharmaceutical industries. Only 1 microgram of DNA (one-millionth of a gram) was released in the lake, he said.
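
That tiny mass still corresponds to an enormous number of molecules, which is why such a small release remains detectable. A back-of-envelope check, assuming a hypothetical 100-base-pair double-stranded tracer fragment (the fragment length is an illustrative assumption, not a figure from the paper):

```python
# How many tracer molecules fit in 1 microgram?
AVOGADRO = 6.022e23          # molecules per mole
BP_MASS = 650.0              # g/mol per base pair of double-stranded DNA
fragment_bp = 100            # assumed tracer fragment length

grams = 1e-6                 # 1 microgram
moles = grams / (fragment_bp * BP_MASS)
copies = moles * AVOGADRO
print(f"~{copies:.1e} tracer molecules")  # ~9e12: tiny mass, huge copy number
```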

“This paper was an exciting collaboration, especially because it has a real scenario of application,” Li said. “This type of experiment could be repeated in Lake Ontario, or even the Atlantic Ocean—we have the capability of doing that. I believe that our technology is one that can make a real impact for the world.”

The combination of expertise in genetics, biological engineering and ecology enabled researchers to study questions holistically and to make basic scientific advancements in multiple fields, said Todd Cowen, professor of civil and environmental engineering in Cornell Engineering and study co-author.

“High-dynamic-range studies of dispersion, fate and transport in these highly stirred and mixed aquatic environments are critical to managing our freshwater, estuarine and coastal marine resources,” said Cowen, who is also a Cornell Atkinson faculty fellow and directs the DeFrees Hydraulics Lab. “This approach is a game changer.”

Implications for conservation and management

Environmental DNA is a cheaper, faster, more accurate tool for fish and wildlife managers, as opposed to traditional survey practices like catching animals, said David Lodge, study co-author and the Francis J. DiSalvo Director of Cornell Atkinson. Lodge leads a nationwide effort to accelerate the adoption of eDNA in federal decision-making as chair of the Policy Subcommittee of the Marine Technology Society’s Environmental DNA Committee.

Regulators might use eDNA data to determine the environmental impact of an offshore energy installation, to track the population size of an endangered species, to see whether cargo ships are introducing invasive species or to assess the population of a commercially exploited fish species.

“You can’t manage what you can’t measure, and many traditional technologies for measuring biodiversity are hopelessly laborious, expensive or altogether infeasible,” Lodge said. “The rapid loss of biodiversity in aquatic ecosystems requires effective, scalable tools to assess biodiversity and monitor change over time, and eDNA is certainly one of those tools.”

Detecting drought stress in trees from the air

Increasing heat and drought are putting our forests under stress. Researchers at the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL) have used drone imagery to investigate how native tree species are responding to climate change. This measurement method opens up new possibilities for monitoring forests over large areas and documenting species-specific strategies for coping with drought.

During the hot summer of 2023, researchers at WSL used drone footage to investigate how seven native tree species react to drought. Using special cameras, they identified species-specific differences in how trees cope with excessive sunlight and in the discoloration and defoliation of tree crowns.

The measurements enabled Petra D’Odorico, a geographer at WSL, and her team to detect acute and persistent water shortages from the air. In the future, this could help to monitor large areas of forest and identify which tree species are best able to cope with climate change. The findings are published in the journal Agricultural and Forest Meteorology.

Climate scenarios predict that the summer months will become hotter and drier. In order to increase the resilience of forests to climate change, forestry experts need to know how different tree species cope with higher temperatures and water shortages. The challenge is that these processes are highly complex. In addition, each tree species reacts differently to heat and drought, depending on its location. Until now, such studies have been costly and usually only possible for individual trees. Remote sensing using drones, airplanes or satellites is changing this.

“We wanted to use drone imagery to find out how native tree species respond to drought and what strategies they employ. We did this both over the entire growing season and over the course of a single day,” explains D’Odorico. She used special cameras to analyze changes in the crowns of seven native tree species (sycamore maple, oak, Norway spruce, hornbeam, European beech, Scots pine and silver fir).

To do this, during the hot summer of 2023 she repeatedly flew drones over the mixed forest of the Swiss Canopy Crane II (SCCII) research area of the University of Basel in Hölstein (BL), where the data from the air could be compared with measurements taken directly on the trees.

Using drone imagery, D’Odorico was able to identify species-specific responses to drought. “For example, we observed that oak trees recover more quickly from a hot previous day than other tree species,” she explains, “or that conifers show delayed signs of drought stress and then suddenly die.”

This measurement method could help to monitor large areas of forest in the future. D’Odorico is now also studying non-native trees, such as the Lebanon cedar and the Oriental beech. Her aim is to identify species that could replace native trees in regions that are particularly affected by climate change.

Making drought stress visible

D’Odorico and her team used multispectral cameras that can also capture parts of the light spectrum invisible to the human eye. The cameras detect a special pigment that trees produce to protect their leaves from excessive sunlight during drought conditions. “This allows us to identify whether a tree is under acute stress even before damage is visible to the naked eye,” says D’Odorico.

However, this short-term reaction does not tell the whole story. If the drought persists, the leaves change color or fall off. In addition to light protection, the researchers therefore measure how green and how densely leafed the tree crown is. “The combination of both measurements gives us a better overview of exactly what is happening,” summarizes D’Odorico.
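
In remote sensing, the short-term photoprotection signal is commonly captured with the photochemical reflectance index (PRI), which responds to xanthophyll-cycle pigments, while greenness and leaf density are tracked with indices such as NDVI. A hedged illustration of the two complementary measurements (the study's exact band combinations are not specified here, and the reflectance values are invented):

```python
import numpy as np

def pri(r531, r570):
    """Photochemical reflectance index: sensitive to the xanthophyll-cycle
    pigments leaves use for photoprotection (short-term stress signal)."""
    return (r531 - r570) / (r531 + r570)

def ndvi(red, nir):
    """Normalized difference vegetation index: tracks canopy greenness
    and leaf density (longer-term discoloration and defoliation)."""
    return (nir - red) / (nir + red)

# Toy per-crown band reflectances from a multispectral flight.
crowns = np.array([
    # r531,  r570,  red,  nir
    [0.040, 0.038, 0.04, 0.45],   # unstressed crown
    [0.032, 0.040, 0.06, 0.30],   # acutely stressed crown
])
print("PRI: ", pri(crowns[:, 0], crowns[:, 1]))   # drops under acute stress
print("NDVI:", ndvi(crowns[:, 2], crowns[:, 3]))  # drops with defoliation
```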

Sugarcane hits the sweet spot for sustainable carbon

When anyone talks about the future of sustainable aviation fuel, one question dominates: how do we replace fossil carbon without compromising food security or biodiversity? Experience leads some researchers to believe the answer is sugarcane.

At the ARC Research Hub for Engineering Plants to Replace Fossil Carbon, scientists are working with global technology leaders, using the latest science to solve this challenge. Emeritus Professor Robert Henry of The University of Queensland has published a perspective on this work in Agriculture Communications.

The mission is ambitious: create economically viable, renewable aviation fuel at the scale this huge industry needs. Plants are the best current source for renewable carbon, and all available evidence suggests sugarcane is the plant offering the best chance of success.

Sugarcane is the only crop currently produced at the tonnage demanded for this purpose.

Other options, like algae, sound promising in theory, but the economics don’t add up. Algal systems are expensive, and unless you can generate a high-value co-product, they will not be cost-effective.

Canola is another option being considered internationally. A lot of Australia's canola is already exported to Europe and turned into fuel rather than food, so there is a food-security issue. But more than that, it's a scale issue: canola yields only a few tons per hectare, so the land needed to grow enough to replace jet fuel would exceed the area of Australia.
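
A rough back-of-envelope calculation shows why. With round-number placeholder figures (none taken from the article), the required cropland comes out at continental scale, on the order of Australia's entire land area:

```python
# Rough scale check with placeholder figures (all values are illustrative
# assumptions, not numbers from the article).
jet_fuel_demand_t = 300e6     # global jet fuel use, tonnes/year (approx.)
seed_yield_t_ha = 2.0         # canola seed yield, tonnes per hectare
oil_fraction = 0.4            # oil content of seed by mass
oil_to_jet = 0.5              # assumed oil-to-jet conversion by mass

fuel_per_ha = seed_yield_t_ha * oil_fraction * oil_to_jet   # t/ha/year
area_ha = jet_fuel_demand_t / fuel_per_ha
area_km2 = area_ha / 100.0
print(f"{area_km2:.2e} km^2 of canola needed")  # ~7.5e6 km^2
print("Australia is about 7.7e6 km^2")          # a continent-scale footprint
```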

Land use is a critical consideration. So, what crops can you scale? Sugarcane is productive and grown on a relatively small proportion of land. Its small footprint and exceptional productivity make it the standout choice.

Research at the Hub focuses on improving plant biomass to convert it into fuel. They are exploring genetic changes in rice, sorghum and sugarcane to increase the proportion of biomass that can be converted without sacrificing yield.

Rice is the model system as it’s easier to manipulate genetically. Once researchers identify promising changes in rice, they will test some of the most effective in sorghum, sugarcane’s closest relative among domesticated crops. If it works in sorghum, they can transfer the most promising alternatives to sugarcane, where the genetics are more complex. This tiered approach saves time and resources and increases our chances of success.

Australia and Queensland, in particular, are uniquely positioned to lead this transformation. There is a well-established sugarcane industry, strong research capability, and global partnerships that can accelerate innovation.

Australians fly more than almost anyone else, so the demand for sustainable aviation fuel is enormous in the domestic market. If sustainable fuel can be produced commercially, the world will quickly adopt it. The Hub’s aim is to find ways to reduce costs further, making the investment compelling.

The challenge now is clear: breed crops with a high proportion of convertible biomass without sacrificing yield. It’s a complex puzzle, but one worth solving.

Two-step genome editing enables creation of full-length humanized mouse models

Understanding human gene function in living organisms has long been hampered by fundamental differences between species. Although mice share most protein-coding genes with humans, their regulatory landscapes often diverge, limiting how accurately mouse models can mimic human biology.

One promising solution is full-length gene humanization (FL-GH), in which entire mouse loci—including coding sequences, introns, untranslated regions, and regulatory elements—are replaced with their human counterparts. Yet existing technologies have struggled to insert very large genomic fragments efficiently or reliably, slowing efforts to develop physiologically relevant humanized models.

Introducing the TECHNO platform

To address these long-standing challenges, a research team led by Associate Professor Manabu Ozawa and Associate Professor Jumpei Taguchi from The Institute of Medical Science, The University of Tokyo, Japan, has developed a streamlined two-step strategy for FL-GH.

Their study, published in Nature Communications, introduces TECHNO (Two-step ES Cell-based HumaNizatiOn), a method that integrates CRISPR/Cas9-assisted genome editing with bacterial artificial chromosome (BAC)-based delivery of large human genomic regions. This framework offers a practical and scalable solution for replacing entire mouse loci with their human counterparts.

“Our results demonstrate a robust and broadly applicable platform for generating FL-GH mouse models,” says Dr. Ozawa.

How TECHNO enables gene humanization

The TECHNO workflow unfolds in two coordinated steps. First, the target mouse locus is excised using Cas9 ribonucleoproteins and replaced with short human homology arms surrounding a selection cassette, creating a precise genomic landing site.

In the second step, a BAC carrying the full-length human gene and its regulatory elements is introduced into embryonic stem cells alongside a universal gRNA targeting the selection cassette, enabling homology-directed integration of genomic fragments exceeding 200 kbp. Because the method relies on standard molecular reagents and widely available BAC libraries, it is theoretically applicable to more than 90% of human genes.

Demonstrated applications and results

Using this platform, the team successfully humanized several loci, including c-Kit, APOBEC3, and CYBB.

Humanization of c-Kit reproduced human-like alternative splicing and organ-specific expression while supporting essential biological functions such as hematopoiesis and spermatogenesis. Replacement of the APOBEC3 locus demonstrated the scalability of the approach, integrating over 200 kbp of human DNA spanning seven genes and generating expression patterns that mirrored those observed in humans.

The researchers also established a humanized CYBB allele and introduced disease-associated mutations to model chronic granulomatous disease. The resulting mice displayed impaired reactive oxygen species production, faithfully recapitulating the molecular phenotype found in patients.

Implications for research and medicine

In the near term, TECHNO is expected to accelerate the development of precise, human-relevant animal models for evaluating therapeutic targets, validating disease-associated variants, and identifying ineffective drug candidates earlier in research pipelines.

Over the longer term, scalable FL-GH may reshape biomedical research by enabling models that more faithfully mimic human gene regulation and disease mechanisms.

These advances also set the stage for integrating humanized models into AI-driven comparative genomics, large-scale humanized allele panels, and systems biology frameworks.

As Dr. Ozawa states, “Overall, these results demonstrate that our method enables not only FL-GH of individual loci but also precise modeling of human genetic diseases in vivo by introducing disease-associated mutations into humanized alleles.”

By enabling stable, high-efficiency integration of genomic fragments exceeding 200 kbp while preserving complex regulatory behavior in vivo, the TECHNO platform represents a major advance toward next-generation humanized mouse models. Its versatility, robustness, and reliance on standard laboratory tools position it as a foundational technology for advancing functional genomics, disease modeling, and translational medicine.

Tightening the focus of subcellular snapshots: Combined approach yields better cell slices for cryoET imaging

Taking images of tiny structures within cells is tricky business. One technique, cryogenic electron tomography (cryoET), shoots electrons through a frozen sample. The images formed by the electrons that emerge allow researchers to reconstruct the internal architecture of a cell in 3D with near-atomic resolution.

The thing is, this method doesn't work if a sample is too thick. Electrons can't penetrate most cells, including human cells. Instead, researchers use a beam of ions to mill these beefier cells down to slices about 200 nanometers thick, but then another challenge arises: ensuring the structure of interest (a ribosome or chloroplast, say) is actually contained in this thin slice. It often takes multiple tries to capture the target object in a milled sample.

New technique improves milling accuracy

Now, by combining light microscopy with ion beam milling, researchers at the Department of Energy’s SLAC National Accelerator Laboratory have identified an additional signal in fluorescent light that can guide the milling process. This technique improves the accuracy of optically guided milling by roughly an order of magnitude, which will allow cryoET to more easily target small, rare structures within cells, including invading viruses.

The paper is published in the journal Nature Communications.

“Our approach tells you where your object is inside that final thin cell section very accurately and with a high success rate,” said Peter Dahlberg, an assistant professor at SLAC and Stanford University. “This improvement also greatly enhances efficiency, as you spend less time milling and missing your targets.” Dahlberg developed this time-saving technique with his research associates Anthony Sica and Magda Zaoralová.

How the tri-coincident system works

The team used what’s called a tri-coincident system, which aligns the focal planes of three different instruments: a scanning electron microscope, an ion beam for cell milling and an optical microscope for observing tiny objects tagged with fluorescent chemicals. Unlike commercial systems, which do not have co-aligned instruments, this system allows researchers to monitor fluorescence while milling.

Although tri-coincident systems can’t normally resolve objects smaller than 200 nanometers due to the fundamental optical limitations of these instruments, Dahlberg and his team turned to interference, a phenomenon of light wave interactions, to overcome this limitation.

As an ion beam erodes the top of a cell, fluorescent light from the object of interest shines up from underneath. The sample’s top surface reflects these light waves, which then turn around to interfere with the incoming light—much like ripples on a pond that cross and create more complex patterns. In this case, as the surface of the sample is eroded, interference causes the fluorescent light from tagged objects to dim and brighten. Sica wrote software that can precisely locate the fluorescent object based on these patterns of dimming and brightening, allowing more accurate milling.
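
The underlying physics can be sketched with a simple model (illustrative only; the team's software is more involved): light from an emitter a distance d below the top surface interferes with its own surface reflection, so the recorded intensity oscillates with period λ/(2n) as material is removed, and counting the bright-dark cycles pins down the remaining depth. All parameter values below are assumptions:

```python
import numpy as np

wavelength = 0.520   # fluorescence wavelength in microns (assumed)
n_ice = 1.31         # refractive index of vitreous ice (approx.)
z0 = 1.5             # true emitter depth below original surface, microns
visibility = 0.6     # fringe contrast (assumed)

milled = np.linspace(0.0, z0, 400)     # total material removed so far
overburden = z0 - milled               # emitter-to-surface distance
phase = 4.0 * np.pi * n_ice * overburden / wavelength
intensity = 1.0 + visibility * np.cos(phase)   # modeled fluorescence trace

# Each full oscillation corresponds to lambda/(2n) of removed material.
fringe_period = wavelength / (2.0 * n_ice)
print(f"modeled intensity range: {intensity.min():.2f}-{intensity.max():.2f}")
print(f"fringe period: {fringe_period:.3f} um of milling per cycle")
print(f"cycles before reaching the object: {z0 / fringe_period:.1f}")
```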

Applications and future directions

To test out the approach, the team imaged a 26 nanometer-wide virus infecting a human cell, demonstrating that this approach can target biological structures previously off limits with cryoET. The technique could be applied to other viral particles and tiny, transient structures involved in cell division, as well as other objects.

“I want to show the field that using this effect and this tri-coincident geometry is the right way to target something very small in cells in their native state,” Dahlberg said.

Next, the team wants to incorporate advanced light microscopy methods into the tri-coincident system to further improve the optical image quality and take the most informative cryoET images possible.

“There’s a lot of research and development into milling, but I want to incorporate fancier, sophisticated fluorescence techniques,” Sica said.

Eye for trouble: Automated counting for chromosome issues under the microscope

Researchers from Tokyo Metropolitan University have developed a suite of algorithms to automate the counting of sister chromatid exchanges (SCEs) in chromosomes under the microscope. Conventional analysis requires trained personnel, takes considerable time, and varies from one analyst to another.

The work is published in the journal Scientific Reports.

The team’s machine-learning-based algorithm boasts an accuracy of 84% and gives a more objective measurement. This could be a game changer for diagnosing disorders tied to abnormal numbers of SCEs, like Bloom syndrome.

Understanding sister chromatid exchanges

DNA, the blueprint of life for all living organisms, is found packaged inside complex structures called chromosomes. When DNA is replicated, two identical strands known as sister chromatids, each carrying exactly the same genetic information, are formed.

Unlike in meiosis, sister chromatids do not need to undergo recombination during mitosis, and in most cases they are transmitted intact to the daughter cells.

However, when some form of damage occurs in DNA, the organism attempts to repair the lesion by using the remaining undamaged DNA as a template. During this repair process, it often happens that specific segments of the sister chromatids are exchanged with each other.

This “sister chromatid exchange” (SCE) is not harmful in itself, but an elevated number can be a good indicator of serious disorders such as Bloom syndrome, which predisposes affected people to cancer.

Challenges and advances in SCE counting

To count SCEs, normal methods involve experienced clinicians looking at stained chromosomes under the microscope, trying to identify the telltale “swapped” segments of sister chromatids. Not only is this labor intensive and slow, but it can also be subjective, dependent on how the human eye perceives features.

A fully automated analysis of microscope images would save time and give objective measures of the number of SCEs, for more consistent diagnoses across different clinical environments.

Now, a team led by Professors Kiyoshi Nishikawa and Kan Okubo from Tokyo Metropolitan University has developed a suite of machine-learning algorithms to count SCEs in images. They combined separate methods: one to identify individual chromosomes, another to tell whether SCEs are present, and a third to cluster and count them, giving an objective, fully automated measurement of the number of SCEs in a microscope image.
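
The structure of such a three-stage pipeline can be sketched as follows; the stage implementations below are naive placeholders standing in for the team's trained models, shown only to make the data flow concrete:

```python
import numpy as np

def segment_chromosomes(image):
    """Stage 1: isolate individual chromosomes (here, naive thresholding
    returning bounding boxes; the paper uses a trained detector)."""
    mask = image > image.mean() + image.std()
    ys, xs = np.nonzero(mask)
    return [(ys.min(), xs.min(), ys.max(), xs.max())] if len(ys) else []

def detect_sce(chromosome_patch):
    """Stage 2: decide whether a chromosome shows an exchange (here, a
    placeholder staining-asymmetry score, not the trained classifier)."""
    left, right = np.array_split(chromosome_patch, 2, axis=1)
    return abs(left.mean() - right.mean()) > 0.1

def count_sces(image):
    """Stage 3: aggregate per-chromosome detections into an image-level count."""
    boxes = segment_chromosomes(image)
    patches = [image[y0:y1 + 1, x0:x1 + 1] for y0, x0, y1, x1 in boxes]
    return sum(detect_sce(p) for p in patches)

rng = np.random.default_rng(1)
toy_image = rng.random((128, 128))
toy_image[40:80, 30:60] += 1.5            # darkly stained chromatid segment
toy_image[40:80, 60:90] += 0.5            # lightly stained chromatid segment
print("SCE count:", count_sces(toy_image))  # -> 1 for this toy example
```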

They found an accuracy of 84.1%, a level sufficient for practical applications. To see how it performed with real data, they collected images of chromosomes from cells with an artificially knocked-out BLM gene, the kind of suppression seen in Bloom syndrome patients. The team's algorithm gave SCE counts consistent with those given by human counters.

Work is currently under way to use the vast amounts of available clinical data to train the algorithm, with more refinements to come. The team believes that replacing manual counting with full automation will help realize faster, more objective clinical analysis than ever before, and that this is only the beginning for what AI can bring to medical research.

System can diagnose infections in 20 minutes, aiding fight against drug resistance

A new technique which slashes the time taken to diagnose microbial infections from days to minutes could help save lives and open up a new front in the battle against antibiotic resistance, researchers say.

Engineers and clinicians from the UK and China are behind the breakthrough system, called AutoEnricher. It combines microfluidic technology with sophisticated analysis and machine learning to enable the diagnosis of pathogens in just 20 minutes.

The team’s paper, titled “Rapid culture-free diagnosis of clinical pathogens via integrated microfluidic-Raman micro-spectroscopy,” is published in Nature Communications.

How AutoEnricher works and its impact

The researchers show how they validated the effectiveness of their system on hundreds of real patient samples, delivering diagnoses with 95% accuracy even in samples with very low concentrations of pathogens. They also demonstrate how AutoEnricher can diagnose multiple simultaneous infections.

In the future, the system could be a valuable tool to tackle antimicrobial resistance, a rapidly accelerating global threat to human health which caused five million deaths in 2019 and is projected to kill 10 million people a year by 2050.

Dr. Jiabao Xu of the University of Glasgow’s James Watt School of Engineering is one of the paper’s first authors. She said, “One of the major drivers of antibiotic resistance is the misuse or overuse of drugs to treat infections. Currently, it can take days or even weeks to culture microbes taken from patient samples in the lab to enable diagnosis.

“That means doctors often have to act urgently and use antibiotics to treat patients suffering from life-threatening conditions like sepsis or pneumonia without knowing for sure if they actually have a bacterial infection.”

The University of Glasgow’s Professor Jon Cooper, a corresponding author, said, “AutoEnricher advances personalized medicine by compressing diagnostic timelines and enhancing antimicrobial decision-making. This new instrument will help enable doctors to match the right antibiotic to an infection at the right time, improving patient outcomes while reducing the potential for the emergence of antimicrobial resistance.”

Technical details and validation of the system

The team’s system combines innovative hardware and software to enable a rapid two-stage diagnosis. In the first stage, the system uses a microfluidic device developed by the team to scrub human cells from samples of patients’ blood, urine or spinal fluid, leaving behind only pathogen cells.

In the second stage, the unique chemical fingerprint of the pathogen cells is identified using a technique called Raman spectroscopy. The fingerprint is then analyzed by a machine learning tool developed by the team. The tool, which was trained on a database of 342 clinical isolates from 36 species of bacteria and fungi, can provide a diagnosis by analyzing as few as 10 pathogen cells in less than 20 minutes.
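
Conceptually, the classification stage resembles the following sketch, in which synthetic Gaussian-peak "spectra" and an off-the-shelf classifier stand in for the team's database and model (both are assumptions for illustration), and a majority vote over a handful of cells yields the diagnosis:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
wavenumbers = np.linspace(600, 1800, 300)   # typical Raman fingerprint region

def synth_spectrum(peak_centers):
    """A fake spectrum: Gaussian peaks plus noise (stand-in for real data)."""
    s = sum(np.exp(-((wavenumbers - c) / 15.0) ** 2) for c in peak_centers)
    return s + rng.normal(scale=0.05, size=wavenumbers.size)

species_peaks = {"E_coli": [1004, 1450], "S_aureus": [1004, 1660],
                 "C_albicans": [1090, 1450]}
X = np.array([synth_spectrum(p) for name, p in species_peaks.items()
              for _ in range(50)])
y = np.repeat(list(species_peaks), 50)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Diagnosis from a handful of cells: classify each cell's spectrum and
# take a majority vote (mirroring the "as few as 10 cells" idea).
cells = np.array([synth_spectrum(species_peaks["S_aureus"]) for _ in range(10)])
votes = list(clf.predict(cells))
print("diagnosis:", max(set(votes), key=votes.count))
```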

The team validated AutoEnricher's performance with the help of three hospitals in China, which provided samples from a total of 305 patients. The samples were also tested using conventional lab methods, in which the bacteria are cultured to enable diagnosis.

AutoEnricher’s diagnosis matched the conventional lab method’s outcomes 95% of the time, and also managed to pick out mixed infections which were missed by the lab culture tests.

Professor Wei Huang of the University of Oxford, a co-investigator on the project, said, “These are really encouraging results from the largest study of its kind conducted on real patient samples. We’ve shown that this single-cell approach to diagnosis can rapidly deliver remarkably accurate results, and even pick out multiple infections which are much harder to spot using conventional lab culture methods.”

Professor Huabing Yin of the University of Glasgow, the senior author of the paper, said, “The next step is to apply AutoEnricher to a much larger cohort of patient samples in a proper clinical study. We’re already working on the first steps towards making that happen, and we hope that AutoEnricher will make a real difference in addressing the spread of antimicrobial resistance in the years to come.”

First-time use of AI for genetic circuit design demonstrated in a human cell line

There are hundreds of cell types in the human body, each with a specific role spelled out in their DNA. In theory, all it takes for cells to behave in desired ways—for example, getting them to produce a therapeutic molecule or assemble into a tissue graft—is the right DNA sequence. The problem is figuring out what DNA sequence codes for which behavior.

“There are many possible designs for any given function, and finding the right one can be like looking for a needle in a haystack,” said Rice University scientist Caleb Bashor, the senior author on a study published today in the journal Nature that reports a solution to this long-standing challenge in synthetic biology.

“We created a new technique that makes hundreds of thousands to millions of DNA designs all at once—more than ever before,” Bashor said.

How the CLASSIC technique works

The technique is called CLASSIC—an acronym for “combining long- and short-range sequencing to investigate genetic complexity.” It not only makes finding useful DNA designs (“genetic circuits”) much faster, but it also creates datasets of unprecedented scale and complexity—exactly what is needed for artificial intelligence and machine learning to be meaningfully deployed for genetic circuit analysis and design.

“Our work is the first demonstration you can use AI for designing these circuits,” said Bashor, who serves as deputy director for the Rice Synthetic Biology Institute.

CLASSIC could usher in a new generation of cell-based therapies, which entail the use of engineered cells as living drugs for treating cancer and other diseases. Demonstrating the approach in a human cell line also highlights its translational potential, since human cell-based therapies promise higher biological compatibility and the capacity to dynamically respond to changing disease states.

Kshitij Rai and Ronan O’Connell, co-first authors on the study, worked on the project during their time as doctoral students in the Bashor laboratory. Their work took four and a half years and involved “a lot of molecular cloning—cutting DNA into pieces and pasting it together in new ways.”

“We invented a way to do this in large batches, which allowed us to make really large sets—known as ‘libraries’—of circuits,” said Rai, a recent doctoral alum about to embark on a postdoctoral fellowship at the University of Washington in Seattle.

The team also used two different types of next-generation sequencing (NGS) techniques known as long-read and short-read NGS. Long-read sequencing scans long stretches of DNA—thousands or even tens of thousands of bases—in one continuous pass. It is slow and noisy, but it captures the entire genetic circuit at once. Short-read sequencing reads only a few hundred bases at a time but does so with high accuracy and throughput.

“Most people do one or the other, but we found using both together unlocked our ability to build and test the libraries,” said O’Connell, who is now a postdoctoral researcher at Baylor College of Medicine.

Building and testing genetic circuits

The Rice team developed a library of proof-of-concept genetic circuits incorporating reporter genes designed to produce a glowing protein, then used long reads to record each circuit’s complete sequence. Each of these sequences was then tagged with a short, unique DNA barcode.

Next, the pooled library of gene circuits was inserted into human embryonic kidney cells, where it produced a measurable phenotype: some cells glowed brightly while others glowed dimly. The researchers then sorted the cells into several groups based on gene expression levels, essentially how bright (high expression) or dim (low expression) they were. Short-read sequencing was then used to scan the DNA barcodes in each group of cells, creating a master map linking every circuit's complete genetic blueprint, its genotype, to its performance, or phenotype.
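
In outline, linking the two sequencing modes amounts to a big join: long reads pair each barcode with its full circuit design, and short-read barcode counts in each sorted bin assign that design an expression score. A toy version (barcodes, designs, counts, and the scoring scheme are all invented for illustration):

```python
from collections import Counter, defaultdict

# Long reads: one continuous pass per circuit, pairing each barcode
# with its complete design (the genotype).
long_reads = [("BC01", "promoter_strong|tf_medium|reporter"),
              ("BC02", "promoter_weak|tf_medium|reporter"),
              ("BC03", "promoter_strong|tf_strong|reporter")]
barcode_to_design = dict(long_reads)

# Short reads: barcode counts recovered from each sorted expression bin.
bin_counts = {"bright": Counter({"BC01": 900, "BC03": 150}),
              "dim":    Counter({"BC02": 800, "BC03": 700})}

# Master map: estimate each design's phenotype as its abundance-weighted
# average bin (bright=1, dim=0), i.e., a crude expression score.
bin_score = {"bright": 1.0, "dim": 0.0}
scores = defaultdict(lambda: [0.0, 0])
for bin_name, counts in bin_counts.items():
    for bc, n in counts.items():
        total, weight = scores[bc]
        scores[bc] = [total + bin_score[bin_name] * n, weight + n]

for bc, (total, n) in sorted(scores.items()):
    print(barcode_to_design[bc], "->", round(total / n, 2))
```

A genotype-to-score table like this is exactly the kind of training data the team's machine learning models then consume to predict the behavior of untested designs.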

“We end up with measurements for a lot of the possible designs but not all of them, and that is where building the ML model comes in,” O’Connell said. “We use the data to train a model that can understand this landscape and predict things we were not able to generate data on, and then we kind of go back to the start: We have all of these predictions—let’s see if they’re correct.”

Rai said he and O’Connell first realized the platform worked when measurements derived via CLASSIC matched manual checks on a smaller, random set of variants in the design space.

“We started lining them up, and first one worked, then another, and then they just started hitting,” he said. “All 40 of them matched perfectly. That’s when we knew we had something.”

Implications for synthetic biology and AI

The long hours spent poring over bacterial plates, cell cultures and algorithms paid off.

“This was the first time AI/ML could be used to analyze circuits and make accurate predictions for untested ones, because up to this point, nobody could build libraries as large as ours,” Rai said.

By testing such vast numbers of complete circuits at once, CLASSIC gives scientists a clearer picture of the “rules” that determine how genetic parts behave in context. The data can train ML models to analyze, design and eventually predict new genetic programs for specific targeted functions.

The study shows that ML/AI models, if given the right amount of training data, are actually much more accurate at predicting the function of circuits than the physics-based models people have used up to this point. Moreover, the team also found that circuits do not have just one “right” solution, but many.

“This is akin to navigation apps: There are multiple routes to reach your destination, some highways, some backroads, but all get you to your destination,” O’Connell said.

Another takeaway was that medium-strength versions of circuit components, such as transcription factors (proteins that bind to specific DNA sequences and control gene expression) and promoters (short DNA segments upstream of a gene that act as its on/off switches), often outperformed strong or weak variants.

“Call it biology’s version of ‘Goldilocks zones’ where everything is just right,” Rai said.

The researchers say this combination of high-throughput circuit characterization and AI-driven understanding may be able to speed up synthetic biology and lead to faster development of biotechnology.

“We think AI/ML-driven design is the future of synthetic biology,” Bashor said. “As we collect more data using CLASSIC, we can train more complex models to make predictions for how to design even more sophisticated and useful cellular biotechnology.”

The collaborative effort and expert perspectives

Both Rai and O’Connell said the collaborative environment at Rice made the work really enjoyable despite setbacks. Also, working with teams from other universities was a key ingredient for success. This collaborative effort included Pankaj Mehta’s group in the Department of Physics at Boston University and Todd Treangen’s group in Rice’s computer science department.

“I think a big part of what we did in this project is to show that while the same individual parts might not have any spectacular function by themselves, if you put the right combination of things together, it gives you these dramatically better genetic circuits,” Rai said. “That holds true for science as well. And that was the best part of this project behind the scenes—all of us bringing our different skill sets together.”

To help put the achievement of this research project in context, James Collins, a biomedical engineer at the Massachusetts Institute of Technology who helped establish synthetic biology as a field, pointed to two early successes as historic reference points for the discipline: the genetic toggle switch, a synthetic circuit composed of two genes designed to repress each other's expression, and the repressilator, a circuit of at least three genes forming a negative feedback loop in which each gene represses the next. With the Rice scientists' work, he said, the field has now reached a new defining milestone.

“Twenty-five years ago, those early circuits showed that we could program living cells, but they were built one at a time, each requiring months of tuning,” said Collins, who was one of the inventors of the toggle switch. “Bashor and colleagues have now delivered a transformative leap: CLASSIC brings high-throughput engineering to gene circuit design, allowing exploration of combinatorial spaces that were previously out of reach. Their platform doesn’t just accelerate the design-build-test-learn cycle; it redefines its scale, marking a new era of data-driven synthetic biology.”

Michael Elowitz, whose foundational work in synthetic biology was recognized by a 2007 MacArthur Fellowship for his design of the repressilator, said that “synthetic biologists have dreamed of programming cells by snapping together biological circuits from interacting genes and proteins.

“However, there is a huge space of potential biological components and circuits, and this dream has remained out of reach in most cases. Bashor’s work demonstrates how we can systematically explore biological design space and make biological engineering more predictable. In the future, it will be exciting to generalize this approach to other interactions and components, bringing us closer to making cells fully programmable.”