AI tool unifies fragmented cell maps into spatial atlases across tissues

A new computational method could dramatically accelerate efforts to map the body’s cells in space, according to a study published in Nature Genetics. Spatial multi-omics technologies—often described as ultra-high-resolution maps of tissues—allow scientists to see not only which genes or proteins are active in a cell, but exactly where that activity occurs. That spatial context is critical for understanding complex organs such as the brain, immune tissues and developing embryos.

Unfortunately, capturing multiple molecular layers at once remains expensive and technically challenging, said David Gate, Ph.D., assistant professor in the Ken and Ruth Davee Department of Neurology’s Division of Behavioral Neurology, who was a co-author of the study.

“In practice, investigators end up with ‘mosaic’ datasets: different slices or batches that each capture only some of the layers, often from different technologies or labs, with batch effects and missing pieces,” said Gate, who also leads the Abrams Research Center on Neurogenomics.

A new tool to unify data

The new computational method, dubbed SpaMosaic, was designed to solve this growing problem. Developed by a collaborative team led by computational investigators, the tool uses artificial intelligence to align and integrate spatial datasets.

To create the new tool, investigators combined contrastive learning—which helps AI models learn meaningful similarities and differences across datasets—with graph neural networks that account for spatial relationships between neighboring cells. The result is a shared representation that allows RNA, protein, chromatin accessibility and histone modification data to be analyzed together, even when individual datasets measure only a subset of these features.
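The published SpaMosaic architecture is considerably more elaborate, but the two ingredients named above can be illustrated in a toy numpy sketch (all function names and parameters here are hypothetical, not from the paper): a spatial k-nearest-neighbor graph supplies neighborhood structure, one graph-convolution step smooths each modality's features over that graph, and an InfoNCE-style contrastive loss pulls the two embeddings of the same spot together.

```python
import numpy as np

def knn_graph(coords, k=4):
    """Binary adjacency linking each spot to its k nearest spatial neighbors."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # a spot is not its own neighbor
    nbrs = np.argsort(d, axis=1)[:, :k]
    adj = np.zeros((len(coords), len(coords)))
    adj[np.repeat(np.arange(len(coords)), k), nbrs.ravel()] = 1.0
    return adj

def graph_smooth(feats, adj):
    """One graph-convolution step: average each spot with its neighbors."""
    deg = adj.sum(axis=1, keepdims=True) + 1.0  # +1 counts the spot itself
    return (feats + adj @ feats) / deg

def contrastive_loss(z_rna, z_protein, tau=0.5):
    """InfoNCE-style loss: the two views of the SAME spot form the positive pair."""
    za = z_rna / np.linalg.norm(z_rna, axis=1, keepdims=True)
    zb = z_protein / np.linalg.norm(z_protein, axis=1, keepdims=True)
    sims = za @ zb.T / tau                      # all-pairs cosine similarities
    sims -= sims.max(axis=1, keepdims=True)     # numerical stability
    log_p = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_p))             # maximize p(matching spot)

rng = np.random.default_rng(0)
coords = rng.uniform(size=(30, 2))              # 30 spots on a 2D tissue section
adj = knn_graph(coords)
z_rna = graph_smooth(rng.normal(size=(30, 8)), adj)
z_protein = graph_smooth(rng.normal(size=(30, 8)), adj)
print(contrastive_loss(z_rna, z_protein))
```

Minimizing such a loss drives embeddings of the same location toward each other regardless of which modality produced them, which is what lets partially overlapping "mosaic" datasets land in one common space.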

Performance across tissues and species

In benchmarking experiments, SpaMosaic consistently outperformed existing integration methods on both simulated data and real-world datasets spanning mouse brain development, mouse embryos and human immune tissues such as lymph node and tonsil. The investigators found that the tool excelled at identifying biologically meaningful spatial domains—regions of tissue with shared functional identity—even when datasets came from different technologies or developmental stages.

“SpaMosaic is also effective at removing technical ‘batch effects’ (like differences in how samples were processed) while keeping the real biology intact,” Gate said.

Predicting missing molecular layers

One of SpaMosaic’s most novel capabilities is its ability to predict molecular layers that were never directly measured. In a large mosaic dataset of the mouse brain, the tool inferred histone modification patterns in regions where only transcriptomic data were available.

“SpaMosaic filled in the gaps and actually revealed stronger links between gene activity and epigenetic regulation than the directly measured chromatin data sometimes did,” Gate said.

Implications for atlases and neuroscience

The findings suggest the method can uncover regulatory relationships between molecular layers, offering an alternative to costly, technically demanding experiments. Instead of being limited by what a single experiment can measure, investigators can now combine data across studies, platforms and labs, Gate said.

“This is a real game-changer for building true multi-omics ‘atlases’ of tissues,” he said. “For neuroscience (our focus), this means better maps of brain development, neuroinflammation, and eventually disease states like Alzheimer’s or ALS, where spatial relationships and multi-layer regulation are critical. It accelerates discovery without requiring every lab to re-do perfect multi-modal experiments on every sample.”

Next steps for SpaMosaic development

The team is already exploring next steps, including scaling SpaMosaic to even larger datasets. Additionally, Gate and his collaborators will further test the method to assess how reliable the predicted data are, he said.

“This project is a great example of what happens when computational innovators and experimental biologists work closely together,” Gate said. “Tools like SpaMosaic are going to democratize spatial multi-omics, letting more labs contribute to and benefit from large-scale tissue atlases.”

Ultrasound waves rupture COVID-19 and flu viruses without damaging cells

Researchers at the University of São Paulo (USP) in Brazil have discovered that high-frequency ultrasound waves similar to those used in medical exams can eliminate viruses such as SARS-CoV-2 and H1N1 without damaging human cells. In an article published in Scientific Reports, they describe how the phenomenon, known as acoustic resonance, causes structural changes in viral particles until they rupture and become inactivated.

“It’s kind of like fighting the virus with a shout. In this study, we proved that the energy of sound waves causes morphological changes in viral particles until they explode, a phenomenon comparable to what happens with popcorn. By degrading the structure of the pathogen, the protective membrane of the virus called the envelope bursts and deforms, preventing the virus from invading human cells,” explains Odemir Martinez Bruno, a professor at the São Carlos Institute of Physics (IFSC) at USP who coordinated the study.

Ultrasound-mediated inactivation of enveloped viruses opens up a new treatment possibility for viral diseases. In fact, the team is already conducting in vitro tests against other infections, such as dengue, chikungunya, and Zika. This alternative treatment is particularly interesting given that antiviral drugs are generally difficult to develop.

“Although it’s still far from clinical use, this is a promising strategy against enveloped viruses in general, since developing chemical antivirals is complex and often slow to yield results. Furthermore, it’s a ‘green’ solution, as it generates no waste, causes no environmental impact, and doesn’t promote viral resistance,” says Flávio Protásio Veras, a professor at the Federal University of Alfenas (UNIFAL) and a FAPESP postdoctoral fellow.

The research brought together scientists from various fields. In addition to theoretical physicists and acousticians from the IFSC, the initiative benefited from the collaboration of specialists from the Virology Research Center and the Center for Research in Inflammatory Diseases (CRID), both affiliated with the Ribeirão Preto Medical School (FMRP-USP), the School of Pharmaceutical Sciences (FCFRP-USP), and the Faculty of Science and Technology at São Paulo State University (UNESP).

These specialists contributed structural and toxicological analyses using techniques such as microscopy and light scattering.

The initiative also benefited from the collaboration of Charles Rice, a professor at Rockefeller University in the United States and the 2020 Nobel Prize winner in medicine. Rice provided fluorescent viruses for real-time visualization.

It’s the geometry

The discovery surprised the researchers because it contradicts classical physics theories, as the wavelength of ultrasound is much longer than the size of the virus. In theory, this difference in size would prevent interaction.

“The phenomenon is entirely geometric. Spherical particles, such as many enveloped viruses, absorb ultrasound wave energy more effectively. It’s that accumulation of energy inside the particle that causes changes in the structure of the viral envelope until it ruptures. Therefore, if viruses were triangular or square, they wouldn’t undergo the same ‘popcorn effect’ of acoustic resonance,” Bruno explains.

He also points out that since the process depends strictly on the shape of the viral particle and not on genetic mutations, variants such as those observed during the pandemic (omicron and delta, for example) do not affect the effectiveness of the technique.

Frequency adjustment

“The technique isn’t intended for decontamination. That already exists. Ultrasound is already used to sterilize dental and surgical equipment, but it works through a different physical phenomenon called cavitation, which destroys biological material,” says Bruno.

He explains that acoustic resonance and cavitation differ mainly in the frequency used and their effects on viruses and cells. “While cavitation occurs at low frequencies and destroys both viruses and tissues through the collapse of gas bubbles, acoustic resonance operates at high frequencies of 3–20 MHz,” he notes.

Regarding acoustic resonance, Bruno explains that sound energy couples with the viral structure, exciting internal vibrations that lead to the mechanical rupture of the viral envelope without altering the temperature or pH of the medium. “The result is a selective and safe mechanism since only the virus absorbs the energy and is destabilized, posing no risk to human cells,” he adds.

Another article published in the Brazilian Journal of Physics describes the theoretical basis behind the phenomenon of popping enveloped viruses like popcorn.

OpenBind’s first data and model release marks a milestone for AI enabled drug discovery

The UK-led OpenBind initiative has reached a major milestone with the release of its first publicly available dataset and predictive AI model, a groundbreaking step toward accelerating the discovery of new medicines using artificial intelligence.

The release showcases how engineering the production of AI-ready data is not only feasible but essential to advancing AI tools across scientific fields, all of which suffer from a lack of data. With this OpenBind release, both high-quality, standardized experimental data and a newly trained predictive model, OpenBind v1, become freely accessible to researchers worldwide, for immediate use in therapeutic discovery and to drive the next generation of AI models.

While AI has introduced a step-change in predictive accuracy for protein structures, its impact on drug discovery has remained muted, limited above all by the global shortage of reliable experimental data measuring in atomic detail how drug-like molecules bind to disease-related proteins. OpenBind aims to fill this critical gap.

Led by Diamond Light Source, the collaboration of structural biologists and AI specialists—supported in its foundation phase by the Department for Science, Innovation and Technology (DSIT)—is the first initiative to generate these essential datasets at an industrial scale, openly and continuously, and designed specifically for AI.

This first release demonstrates that OpenBind’s pipeline is now operational, having generated 800 high-quality measurements in only seven months—in the past, such large datasets took years to be produced and released.

This integrated operation combines automated chemistry, robust binding measurements and high throughput crystallography at Diamond’s XChem Fragment Screening facility with an engineered data release process and AI model training using the UK’s Isambard-AI compute cluster.

It lays the groundwork for transformative progress in drug discovery, with future data tranches planned to address global-health challenges such as COVID-19, malaria, dengue, Zika, and cancer, where rapid development of new treatments remains vital.

Professor Mohammed Alquraishi of Columbia University said, “AlphaFold2 revolutionized protein structure prediction by leveraging decades of experimental data on protein structures in the PDB. The equivalent of such a dataset for protein-drug complexes does not yet exist, but OpenBind aims to create it, and in the process create the next generation of computational tools for modeling interactions between drugs and proteins.”

The initial dataset also reflects invaluable learning from the initiative’s early experimental cycles. Standardized workflows, strong metadata practices and high levels of automation have proven crucial in ensuring the consistency and reproducibility required for AI, while highlighting opportunities to further streamline data handling and release frequency.

Dr. Fergus Imrie of the University of Oxford said, “High-quality experimental data is essential for developing new and improved AI models, and this first data release shows that OpenBind now has this foundation in place. We’re enabling AI to improve model performance and guide future experiments, helping to accelerate discovery.

“The lessons from these early cycles are already helping us improve the speed, consistency, and reproducibility of the pipeline, which will be critical as OpenBind grows.”

Professor Frank von Delft, principal beamline scientist at Diamond Light Source, said, “We couldn’t have made such rapid progress without the contributions of our consortium members and operational team. Their expertise and commitment have enabled us to reach this ambitious milestone. We will now implement the lessons from this foundation phase to ramp up a long-term operation that links high-volume production of AI data with active discovery projects.”

Building on this foundation, OpenBind will expand to include many more targets, larger chemical series and deeper datasets, alongside community-blind challenges that will validate AI models against newly generated experimental data. Ultimately, OpenBind aims to create a global, open data engine capable of supporting the development of faster, more accurate and more equitable therapeutics.

With large DNA fragment assembly, scientists can design microbes that produce countless complex products

A review in Quantitative Biology demonstrates that scientists can now reliably build and combine very large pieces of DNA, making it much easier to redesign microbes such as yeast and bacteria to act as efficient “cell factories.” With these advances, whole biological pathways, and even extra chromosomes, can be assembled and inserted into cells, allowing microbes to produce complex products like medicines, fuels, and chemicals more efficiently than before.

The review highlights recent progress and makes clear that the field has reached a turning point. The ability to assemble large DNA segments quickly and accurately opens possibilities with relevance for health care, sustainable manufacturing, agriculture, and industrial biotechnology.

The methods described are relevant to ongoing global debates about how to reduce reliance on fossil fuel-based production, improve the sustainability of manufacturing, and scale up biotechnological solutions safely.

“As large DNA assembly technologies increasingly integrate with automated platforms and AI-driven design, the development cycle of microbial cell factories is poised to accelerate dramatically,” said corresponding author Yue Shen, Ph.D., Chief Scientist of Synthetic Biology at BGI Research in China. “This technological leap is unlocking their true potential as practical, sustainable platforms for global biomanufacturing.”

Light without electricity? Glowing algae could make it possible

Imagine a sea of glowing blue lights pulsing to the beat of the music. But instead of glow sticks filled with toxic chemicals, the luminescence comes from living algae, shimmering on demand. In a new study published in Science Advances, researchers at the University of Colorado Boulder and collaborators unveil a new technology that could make it possible.

They’ve successfully turned on the “light switch” in algae and kept them lit up using simple chemical solutions. The finding opens the door for future technologies such as autonomous robots that can operate in dark environments and living sensors for water quality.

“This project was a moonshot idea,” said Wil Srubar, professor in the Department of Civil, Environmental and Architectural Engineering. “I was curious if we could create a world in which we don’t use electricity but rather use biology to produce light. This discovery really paves the way for engineering other living light materials and devices.”

The science of natural bioluminescence

In the natural world, a wide range of animals, from fireflies to anglerfish and even certain mushrooms, produce their own light, a phenomenon known as bioluminescence. In the deep ocean, as much as 90% of creatures may be able to glow and glitter through chemical reactions inside their cells.

Pyrocystis lunula, a type of bioluminescent algae, is one of the organisms that emit an icy blue glow sometimes seen in ocean waves. Subsisting only on seawater, sunlight, and carbon dioxide (CO2), these photosynthetic organisms flash when they are agitated by crashing tides or passing boats, for example.

But those flashes last only milliseconds. Srubar and his team wondered if they could keep the lights on with chemistry instead. Previous research has suggested that exposure to different chemical compounds could activate P. lunula’s bioluminescent reaction.

Using chemistry to sustain the glow

So the team exposed the algae to an acidic solution with a pH of 4, similar to that of tomato juice, and a basic solution with a pH of 10, comparable to mild soap.

They found that both environments could trigger light production in P. lunula. In the acidic condition, the algae could stay aglow for as long as 25 minutes, with light appearing bright and concentrated. In the basic condition, the glow was more diffused and short-lived.

“It was a very exciting moment when we found the right chemical stimulant that allowed the light to stay on for a long time,” says Giulia Brachi, the first author and research associate in the Department of Civil, Environmental and Architectural Engineering. “This is the first time we have figured out how to sustain luminescence.”

3D-printing living light structures

To turn these glowing algae into usable materials, the researchers embedded them into a naturally derived hydrogel, a type of water-based gel material. They then used 3D printing to shape the material into a variety of structures, from a crescent pattern to a CU Buffalo logo.

By exposing the structures to the acidic or basic solution, they prompted the P. lunula inside to emit light, illuminating the entire structure in a blue glow.

Inside these printed structures, the algae remained alive for weeks. The acidic condition worked best, with P. lunula in these 3D-printed structures retaining 75% of their brightness even after four weeks.

Potential applications and environmental impact

The findings could have wide applications beyond making eye-catching designs. These living materials could someday help light up autonomous robots for deep-sea or space exploration without the need for batteries.

Next, the team is exploring whether P. lunula may respond to other chemicals. If so, they could also serve as a tool for water quality monitoring and light up when toxins are present.

Beyond their ability to light up spaces, P. lunula also offers an environmental benefit. Because these algae are photosynthetic, they convert carbon dissolved in seawater into energy.

“We’re storing carbon while we’re producing light, whereas conventionally, we emit carbon to light up spaces,” Srubar said.

And yes, future rave scenes could someday glow with light powered by living algae.

DNA-guided CRISPR flips gene editing script, opening a new path for precise diagnosis and antivirals

A research team led by Prof. Hsing I-Ming, Professor of the Department of Chemical and Biological Engineering (CBE) at The Hong Kong University of Science and Technology (HKUST), in collaboration with Prof. Zhai Yuanliang, Associate Professor of the Division of Life Science (LIFS), has successfully developed the world’s first DNA-guided CRISPR-Cas system capable of programmable RNA targeting and cleavage.

This breakthrough overturns the conventional CRISPR paradigm, which uses RNA as a guide to target DNA. The new system holds tremendous potential for clinical applications, opening new avenues for RNA-targeted therapies and diagnostics, including improved accuracy in rapid infectious disease testing and the advancement of antiviral treatments.

The findings have been published in Nature Biotechnology.

A simple analogy: Reprogramming the GPS navigation system

The operation of the CRISPR-Cas system can be understood using the analogy of a Global Positioning System (GPS). Prof. Hsing I-Ming explained, “The RNA guide molecule is like the address you type in, and the Cas protein is the car that drives to that address—the DNA target. Traditional detection platforms, including SHERLOCK and DETECTR, are all based on this principle.”

The HKUST team has proposed a new approach. By combining a newly developed DNA-guided Cas12a system with isothermal amplification, they constructed a revolutionary diagnostic platform called SLEUTH (Specific Locus Evaluation Utilizing Targeted Hydrolysis), successfully inverting the traditional method.

Through engineering, the team designed a synthetic “CRISPR DNA” (crDNA) molecule that reprograms the Cas12a protein to use DNA as a guide, directing the Cas protein to target different RNA molecules. This paradigm-shifting innovation opens up an entirely new design space for programmable RNA tools.

Key breakthrough: Decoupling the ‘instruction’ from the ‘activation’

The key to this breakthrough lies in a clever structural insight. The research team decoupled two functions that are normally combined in the natural CRISPR system: the “activation” signal (the PAM sequence) and the “information-carrying” address.

By designing a short DNA strand that mimics the PAM-containing duplex, they created a functional deoxyribonucleoprotein complex capable of recognizing and cleaving any selected RNA target.

To validate the design, the team combined three advanced technologies: AlphaFold-guided modeling, molecular dynamics simulations, and high-resolution cryo-electron microscopy (cryo-EM). The experimental cryo-EM structure determined by Prof. Zhai Yuanliang and Dr. Lam Wai-Hei, co-first author of the study and postdoctoral fellow in the LIFS, closely matched the computational predictions, thereby confirming the feasibility of this artificial activation pathway.

“It is thrilling to collaborate with the innovative engineers on Prof. Hsing’s team,” said Prof. Zhai. “Observing the synthetic DNA guide interacting with Cas12a at atomic detail was incredibly exciting. This clearly demonstrates how AI-driven design and structural biology can work together synergistically.”

Advantages of DNA guidance: More stable, more precise, safer, and lower cost

Compared to existing RNA-based CRISPR diagnostics (such as SHERLOCK) and RNA-interference (RNAi) tools, the new DNA-guided system offers several practical advantages:

  • More stable, no cold chain required: Synthetic DNA is chemically far more stable than RNA. RNA guide molecules degrade easily at room temperature and must be kept frozen, whereas DNA guides remain stable at ambient temperatures, eliminating the cold-chain storage and handling that RNA-guided methods generally require.
  • Lower cost: Synthetic DNA is significantly cheaper to manufacture than RNA. While the team has not conducted a formal cost analysis, RNA synthesis requires additional chemical protection steps, and the cold-chain handling RNA guides typically need adds further logistical expense.
  • Greater precision: The system can discriminate single-nucleotide differences in the target RNA sequence—a level of accuracy that RNAi typically cannot achieve.
  • Wider applicability: RNAi is largely limited to silencing protein-coding mRNA. This new system can target any RNA molecule, including non-coding RNAs (microRNAs, and long non-coding RNAs), which are key regulators of gene expression and disease.
  • Safer for therapy: Compared to existing RNA-targeting CRISPR tools such as Cas13, the new Cas12a-based system shows significantly less off-target RNA cleavage in cells—meaning fewer collateral effects—a critical safety consideration for future therapeutic development.
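To see why base-for-base recognition gives single-nucleotide precision, consider a toy matching model (purely illustrative; the real Cas12a recognition mechanism involves protein conformational checks, and the names below are hypothetical): the DNA guide must form a perfect DNA:RNA hybrid with its target, so a single mismatched base is enough to veto cleavage.

```python
# Watson-Crick pairing between a DNA guide base and an RNA target base
DNA_TO_RNA = {"A": "U", "T": "A", "C": "G", "G": "C"}

def cleaves(guide_dna, target_rna, max_mismatches=0):
    """Toy single-nucleotide discrimination: cleavage only fires when the
    DNA guide pairs base-for-base with the RNA target."""
    assert len(guide_dna) == len(target_rna)
    mismatches = sum(DNA_TO_RNA[g] != t for g, t in zip(guide_dna, target_rna))
    return mismatches <= max_mismatches

print(cleaves("ATCG", "UAGC"))  # True: perfect DNA:RNA hybrid
print(cleaves("ATCG", "UAGA"))  # False: one mismatched base vetoes cleavage
```

RNAi, by contrast, tolerates partial complementarity, which is why it typically cannot distinguish targets that differ at a single nucleotide.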

The team has successfully validated the SLEUTH platform’s exceptional detection sensitivity using 31 clinical samples of SARS-CoV-2. Results showed attomolar-level sensitivity for both RNA and DNA targets across various conditions. The DNA-guided method is particularly well-suited for point-of-care deployment in clinics, airports, and resource-limited settings without cold-chain storage requirements.

Concrete disease relevance: From COVID-19 to future pandemics

“Hong Kong and the broader region have been repeatedly affected by viral pathogens—from SARS and influenza to COVID-19,” noted Prof. Hsing. Many of these viruses carry RNA genomes or rely on RNA intermediates to replicate. A DNA-guided CRISPR tool capable of precisely cleaving those RNA molecules could form the basis of a new class of antiviral intervention.

Mr. Wu Xiaolong, a Ph.D. student in the Department of Chemical and Biological Engineering and co-first author of the study, said, “Compared with traditional crRNA-based systems, our DNA-guided design replaces RNA with a more stable DNA guide, demonstrating a mechanism never before seen in natural CRISPR systems. We look forward to extending this concept to more RNA-based diagnostics and therapeutics.”

Patents and future development

HKUST has filed two U.S. provisional patents for this innovative technology and is actively exploring its applications in RNA diagnostic testing, antiviral therapies, live-cell RNA imaging, and programmable RNA transcript regulation.

Over the next three years, the team plans to expand the SLEUTH platform to detect other respiratory viruses and explore its potential in liquid biopsy applications to identify circulating RNA biomarkers in cancer.

This work aligns closely with HKUST’s newly established School of Medicine and the University’s growing emphasis on translational medicine and RNA-based therapies.

DNA-reading AI reconstructs ancestry in minutes, matching top statistical methods

Researchers at the University of Oregon have developed an artificial intelligence tool that can read genetic code the way large language models like ChatGPT read text. Scanning the genome for biological mutation patterns, the computer model traces pairs of genes back in time to their last common ancestor.

It’s the first language model designed for population genetics, said Andrew Kern, a computational biologist in the UO College of Arts and Sciences. As described in a paper published April 10 in the Proceedings of the National Academy of Sciences, the AI tool offers scientists a fast and flexible alternative to classical methods for reconstructing evolutionary history.

In practice, it can help researchers like Kern understand when disease-resistance genes emerged in a population, for example, or when species evolved key traits.

“Advances in generative AI and the architectures behind them are potentially useful to a number of fields outside a chatbot,” said Kern, an Evergreen professor of biology. “We’re borrowing strengths from the world of AI and applying them in this different context that’s largely been untapped.”

Training AI on the language of DNA

Genomes are often compared to a written language, with combinations of DNA’s four-letter alphabet—A, T, C and G—forming the basis for genes and chromosomes. Kern and his lab are most interested in what’s misspelled, which scientists call mutations: changes in DNA sequences, like swapped or missing letters, that accumulate over time as part of evolution.

Often harmless, mutations can be passed down from generation to generation, leaving a trail of breadcrumbs for tracing ancestral relationships.

Traditional methods based on math and statistics are the gold standard for translating mutations into ancestry. They’re difficult to beat in most cases, said Kevin Korfmann, lead author of the study and former postdoctoral researcher at the UO. But those classical probabilistic approaches can be slow and struggle with large or incomplete genomic datasets, he added.

So, the researchers looked to AI to efficiently interpret the language of life by modifying a GPT-2 model, the older machine learning architecture behind ChatGPT. But instead of being trained on large volumes of English text, the language model was trained on simulations of genetic evolution across different species—including bacteria, rodents, mosquitoes and primates—to learn and recognize mutation patterns.

“We can’t repeat evolution, so one of the key workflows we have is developing simulations,” Korfmann said. “The simulations mimic evolutionary processes, and then we use the outcomes as training data for our deep learning models.”

In general, stretches of DNA with many mutations likely trace back to a distant common ancestor, whereas those with few mutations are likely to share a more recent ancestor. This helps explain why chimpanzees are considered humans’ closest living relatives, with similar DNA, while sea sponges are the most distant, having diverged genetically more than 700 million years ago.

Based on those mutation patterns and other biological principles, the AI model can predict when gene pairs last shared a common ancestor, known as the “coalescence time.”
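Under the simplest neutral model, that mutation-clock reasoning reduces to one line of arithmetic: two lineages separated for T generations accumulate roughly 2μT differences per site, where μ is the per-site, per-generation mutation rate and the factor of 2 counts both branches back to the ancestor. A hypothetical back-of-envelope estimator (nothing like the AI model itself, which learns far richer patterns from simulations):

```python
def naive_coalescence_time(seq_a, seq_b, mu):
    """Invert the molecular clock: T ~ (differences per site) / (2 * mu).

    mu is the per-site, per-generation mutation rate; the factor of 2
    accounts for mutations accumulating on both branches since the ancestor.
    """
    assert len(seq_a) == len(seq_b)
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    return diffs / len(seq_a) / (2 * mu)

# 2 mismatches over 10 sites with mu = 0.01 -> 0.2 / 0.02 = 10 generations
t = naive_coalescence_time("ACGTACGTAC", "ACGAACGTAT", mu=0.01)
print(t)  # ~10 generations
```

This naive inversion breaks down with recombination, selection, or missing data, which is precisely where a model trained on realistic simulations earns its keep.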

Sidestepping data bottlenecks

In tests, the tool performed as well as state-of-the-art statistical methods, which was surprising to the research team.

“You never really know what’s going to work when you’re essentially borrowing techniques from a totally different world and applying them to a new problem,” Kern said. “But this was a case where things worked really well.”

The computer model was also dramatically faster. While traditional methods can take hours or even days to decode a single mosquito chromosome, the new approach can do it in minutes. That efficiency is especially beneficial for scientists handling large amounts of genetic sequence data.

“Compared to classical inferential approaches, the AI tool doesn’t have to reason about every mutation individually,” Korfmann said. “It just reads the patterns because all of the expensive statistical work was done up front, during training, which sidesteps the bottleneck.”

The model’s simulation-based training also enables scientists to use DNA datasets that are incomplete or missing genetic code—an issue Kern frequently faces when working with mosquito genetic databases for his research on malaria transmission.

That versatility comes at a crucial moment for malaria control, Kern said. For decades, insecticides have been a cornerstone of efforts to control malaria-spreading mosquitoes. But evolution, as Kern puts it, “did its thing.”

“Insecticide resistance is being observed in all of these mosquito populations today,” he said. “A major challenge in preventing the spread of malaria has been understanding the evolution of insecticide resistance. Now, we can go in with our AI model, ask how long ago these resistance genes arose in the population, and learn about the evolutionary history of this critical carrier of malaria.”

Looking ahead, Kern and Korfmann aim to advance the biological model beyond tracing shared ancestry between two lineages towards reconstructing full genealogical trees across multiple lineages. Some traditional methods can already do this, but Kern said they’d like to chase that goal from a machine-learning angle.

“There’s so much going on in the machine learning field that we haven’t applied yet in our field,” Korfmann said. “There’s tons of translational work to do to get these novel algorithms working in biology.”

Seaweed integration boosts efficiency and cuts waste in aquaculture, study finds

A new study found that cultivating seaweed species alongside marine finfish in integrated multi-trophic aquaculture (IMTA) operations, where seaweeds receive nutrient-rich effluent from fish production, can significantly reduce—and even eliminate—key waste products from marine finfish farming.

The study titled “Evaluation of native macroalgae species of the Southeast U.S. and Caribbean for use in integrated multi-trophic aquaculture (IMTA)” was published in the journal Aquaculture International.

Led by scientists at the University of Miami Rosenstiel School of Marine, Atmospheric, and Earth Science, the study offers new insights into how aquaculture producers can improve sustainability by farming macroalgae species in a complementary system alongside finfish.

“With significant interest in the development of marine aquaculture throughout the Southeast U.S. and Caribbean, these findings can be used to guide the selection of extractive macroalgae species in operations culturing marine finfish,” said study lead author Haley Lasco, a marine biology graduate student at the Rosenstiel School and currently a scientist at the South Carolina Department of Natural Resources.

To conduct the study, the researchers established a pilot-scale IMTA system at the Rosenstiel School’s Experimental Hatchery facility on Virginia Key, Florida, to evaluate the performance of four candidate macroalgae (seaweed) species under consistent marine finfish effluent conditions.

The flow-through IMTA system used a consistent source of nutrient-rich effluent from a yellowtail snapper (Ocyurus chrysurus) grow-out tank maintained at commercial-scale density and feeding rates. Each macroalgae species was grown in three replicate tanks receiving the same effluent, enabling controlled comparisons of nutrient removal, nutritional composition, and market potential under conditions representative of commercial aquaculture.

At the end of each two-week trial, macroalgae were evaluated for growth and analyzed for protein, fat, fiber, ash, minerals, metals, and carbon and nitrogen content, including stable isotope ratios.

Results provide new insights into macroalgae performance under real-world conditions and demonstrate the potential to reduce total ammonia nitrogen (TAN) in marine finfish aquaculture effluent to below detectable levels.

“This work shows how integrating macroalgae into marine finfish aquaculture systems can reduce waste while producing a valuable secondary crop,” said John D. Stieglitz, Ph.D., a research associate professor in the Department of Marine Biology and Ecology, who led the project as principal investigator.

“It provides a practical framework for selecting species based on specific production goals, improving environmental performance while creating opportunities for better production economics and more diversified products using an IMTA approach.”

IMTA is a production system in which species from different trophic levels are farmed together in a complementary arrangement, with the goal of mimicking natural ecosystems, thus improving sustainability, reducing waste, and increasing overall productivity.

This form of aquaculture allows the waste of one organism to be utilized by another at a different trophic level, creating a system with less waste and therefore a lower environmental impact.

The primary aim of this study was to determine which macroalgae species from the Southeast U.S. and Caribbean perform best across the measured categories of nutrient removal, nutritional composition, and market potential, providing stakeholders with a guide for selecting a desirable macroalgae species to implement in their operations.

The results demonstrate the potential of IMTA in these regions and offer potential mitigation solutions for many of the most prominent sustainability concerns regarding the development of marine aquaculture operations for fed species such as marine finfish.

“Our findings support more sustainable aquaculture operations and help producers make smarter choices about macroalgae for IMTA,” said Lasco.

Novel wheat hybrids increase resistance to major fungal disease by up to 70%

A new experimental study has identified a novel genetic locus in a common agricultural weed, Elymus repens, that provides significant resistance to the destructive fungal disease Fusarium head blight (FHB). The locus has now been successfully transferred into wheat to produce FHB-resistant hybrids.

FHB is a virulent fungal disease that poses a serious threat to global food security and is regarded as one of the world’s most economically harmful cereal diseases. FHB reduces grain yield and produces mycotoxins that cause gastrointestinal issues in humans and livestock, requiring infected crops to be destroyed.

E. repens, more widely known as couch grass or common couch, is a wild relative of cultivated wheat, allowing the two species to interbreed and form genetic hybrids.

“Both research and breeding practice have shown that developing and deploying resistant wheat cultivars is the fundamental solution to FHB,” says study author, Fei Wang. “However, current efforts are limited by a scarcity of major resistance sources, narrow genetic backgrounds and inefficient use of resistance genes.”

Transferring resistance from wild relatives

The new study from Dr. Yinghui Li and Houyang Kang’s research team, published in the Journal of Experimental Botany, outlines how they successfully hybridized E. repens with cultivated wheat to transfer FHB-resistance genes from E. repens into the wheat.

When plants were deliberately infected with the FHB pathogen, hybrid genotypes carrying the resistance genes, designated 1StL, showed a 69% reduction in diseased spikelets under greenhouse conditions compared to the control wheat, and a 60% reduction under field conditions.

A newly named resistance locus

The researchers found no presence of genetic markers from previously identified alien FHB resistance genes in the hybrids, indicating that 1StL carries a novel resistance locus, which the team has named Fhb.Er‑1StL.

Notably, this is the third resistance locus that Dr. Yinghui Li and Houyang Kang’s group has identified from Elymus repens, following their earlier discoveries of QFhb.Er‑7StL and Fhb.Er‑3StS. The new locus represents an additional, valuable source of resistance that can now be used in wheat breeding.

“We believe this work is of practical importance for accelerating the breeding of resistant, high-yielding wheat varieties and breaking the bottleneck in FHB resistance breeding,” says Dr. Yinghui Li.

This study was conducted by researchers from the State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China at Sichuan Agricultural University, Chengdu, China.

Synthetic biology promised to rewrite life—with the death of its pioneer, J. Craig Venter, how close are scientists?

When scientist J. Craig Venter and his team announced in 2010 that they had created the first cell controlled by a fully synthetic genome, it marked a turning point in how scientists think about life.

For the first time, DNA—the molecule that carries the instructions for life—had been written on a computer, assembled in a laboratory and used to control a living cell. The achievement suggested something profound: Life might not only be understood but designed.

Venter was a biologist widely recognized for his groundbreaking contributions to genomics, including leading efforts to sequence the first draft of the human genome. His team’s successful creation of the first synthetic bacterial cell is considered pivotal to the field of synthetic biology.

By combining biology and engineering, synthetic biology seeks to design and build new biological systems or redesign existing ones for useful purposes. Rather than only observing how life works, scientists use tools such as DNA synthesis and genetic engineering to “program” cells to perform specific tasks, such as producing vaccines, developing sustainable fuels or detecting environmental toxins.

But how far has the field gone since Venter’s original synthetic bacterial cell?

As a biochemist who uses genomics in my teaching and research, I am interested in understanding what this shift in biology means and how far it has actually taken scientific innovation. Following Venter’s death on April 29, 2026, it is worth revisiting that moment and asking whether synthetic biology has delivered on its promise.

What is synthetic biology?

For much of the 20th century, biology focused on decoding life.

The discovery of DNA’s structure in 1953 revealed how genetic information is stored. Decades later, the Human Genome Project that Venter helped accelerate mapped the full set of human genes.

But Venter and others pushed the field further: If DNA could be read like code, could it also be written?

This idea underpins synthetic biology, which aims to design and construct biological systems rather than simply study them. Instead of modifying one gene at a time, researchers began exploring whether entire genomes could be built and inserted into cells.

In 2010, Venter’s team demonstrated that this was possible. They constructed a bacterial genome and used it to take control of a living cell. While the cell itself was not built entirely from scratch, their work showed that the instructions for life could be engineered.

In other words, synthetic biologists were moving from reading life to rewriting it entirely.

Big promises and bold expectations

Synthetic biology has already led to a range of promising outcomes across medicine, energy and environmental science.

Researchers have engineered microbes to produce lifesaving drugs such as artemisinin, an antimalarial compound, and to manufacture sustainable biofuels that could reduce reliance on fossil fuels. In addition, researchers are using synthetic biology to design organisms capable of detecting and breaking down environmental pollutants, offering new tools for bioremediation.

At the heart of these ideas was a powerful analogy: If biology could be treated like software, then designing organisms might one day resemble writing code.

This vision attracted significant investment and policy attention. The U.S. Government Accountability Office has highlighted synthetic biology’s potential to address challenges in multiple industries while also raising important ethical and safety considerations. For example, synthetic biology techniques could be used to develop biological weapons and could unintentionally harm ecosystems and human health.

Progress slower than expected

Despite this progress, synthetic biology has not fully realized its early ambitions. One major reason is the complexity of living systems.

Early approaches to synthetic biology treated cells as modular systems, where components could be predictably exchanged. In practice, biological systems are highly interconnected. Gene interactions are difficult to predict, and results observed in controlled laboratory conditions do not always scale to real-world environments.

This challenge has been particularly evident in areas such as biofuels, where translating laboratory successes into industrial-scale production has proved difficult.

There are also more fundamental limitations. Scientists still cannot construct a fully living organism from nonliving components alone. Even Venter’s synthetic cell depended on an existing biological system to function.

As a result, the goal of creating life entirely from scratch remains out of reach for now.

New questions and emerging risks

As technology has advanced, it has also raised new ethical and security concerns. The same tools used to design beneficial organisms could potentially be misused.

Synthetic biology is widely recognized as a dual-use field, where advances in gene editing, DNA synthesis and bioengineering may enable not only medical and environmental innovations but also the creation or modification of harmful organisms.

The increasing accessibility of these technologies further lowers barriers to misuse, making biosecurity threats more distributed and difficult to control. At the same time, governance frameworks often struggle to keep pace with rapid technological developments, leaving gaps in oversight and international coordination.

Beyond immediate risks, broader questions remain about how far humans should go in redesigning life and what unintended consequences such changes could have for ecosystems. Engineered organisms may introduce risks such as genetic contamination and ecosystem disruption, which would harm biodiversity and ecosystem services.

These concerns are likely to become more pressing as the technology behind synthetic biology continues to develop, particularly as emerging tools such as artificial intelligence accelerate the design of new biological systems.

Venter’s legacy

The implications of the idea that life could be engineered rather than just observed are still unfolding.

Synthetic biology has not yet delivered a world of fully programmable organisms solving global challenges. But it has changed expectations, both within science and beyond, about what might be possible in biological design.

In that sense, the impact of synthetic biology is already clear: It has altered not just how scientists study life but how society imagines its future.

Venter’s legacy includes the questions he made unavoidable: how far scientists should go in designing life, who gets to decide, and what responsibilities come with that power. The answers remain unsettled. But the trajectory seems to be that science is learning, cautiously and imperfectly, to author life.