Black soldier fly larvae show promise for safe organic waste removal

People and animals create lots of waste that is usually sent to landfills, incinerated, or stored in engineered ponds such as manure lagoons. Now, researchers report a potential removal method using insects, specifically black soldier fly larvae. In experiments, the larvae ate spoiled food, sewage sludge, or livestock manure, and removed most human-pathogenic viruses. The researchers say this demonstrates a step toward simple, environmentally friendly waste management.

The research is published in Environmental Science & Technology Letters.

“Viruses in organic wastes have rarely been studied in a systematic way, but our research shows that black soldier fly larvae can help reduce potential viral risks, highlighting the promise of this approach for future waste treatment,” says Gang Luo, a corresponding author of the study.

Zhijian Shi, Luo, and colleagues wanted to see how well black soldier fly larvae break down RNA viruses in three organic waste streams, and whether viral material persists in their bodies or appears in their frass (tiny, nutrient-rich pellets larvae excrete). The researchers fed separate groups of black soldier fly larvae food waste, sewage sludge, or pig manure.

After eight days, all the larvae gained weight, with those that ate food waste growing the most, followed by those fed pig manure and those fed sewage sludge.

When the team members assessed the three waste streams, they found that the initial feedstocks contained a diverse array of RNA viruses that could infect living things such as bacteria, fungi, plants, and animals, including humans.

Larvae that consumed food waste contained low amounts of insect-specific viruses, which the researchers consider to be of minimal ecological or human infection risk.

In contrast, larvae that were fed sewage sludge or pig manure had higher viral diversity, and their frass contained RNA viruses that could infect humans. Although larval digestion significantly decreased the abundance of most human-pathogenic viruses (e.g., noroviruses) from the fecal organic matter sources to frass, some viruses (e.g., picobirnaviruses that can cause digestive symptoms) persisted in both the final larvae and frass.

The researchers conclude that black soldier fly larvae are a promising simple and natural approach for waste management, but larvae consuming fecal waste may need additional treatment for safe use in feed or for their frass to be used in fertilizers. Future research will focus on whether viruses remaining in the larvae or their frass are still active. Gang Luo says this is “key to safely reusing them in a circular waste management system.”

Thermogenetics: How proteins can be controlled by heat

Protein activity can be precisely regulated via subtle changes in temperature using heat-sensitive switches. Underlying this capability is a novel modular design strategy developed by researchers at the Institute of Pharmacy and Molecular Biotechnology of Heidelberg University. The strategy allows the integration of sensory domains in various proteins regardless of function or spatial structure.

A new direction for thermogenetics

This new approach in the field of thermogenetics is broadly applicable and opens up new possibilities for precise, non-invasive control of different cellular processes. It was developed by a research team led by Prof. Dr. Dominik Niopek and Dr. Jan Mathony and is published in Nature Chemical Biology.

Proteins are the molecular machines of the cell. They regulate nearly all vital processes and their responses are highly dynamic. To better understand these processes and their chronological sequence, scientists need tools that can be used to change individual parameters precisely and in a controlled manner. The most suitable proteins are those that can be turned on and off like technical devices. Especially attractive in this context are heat-sensitive protein switches, since temperature can be regulated with spatiotemporal precision and, as a signal, penetrates deep into tissue and complex biological samples.

Engineering precise allosteric thermoswitches

Until now, temperature-dependent control of proteins was considered technically difficult and highly limited. Most available methods allow only indirect control by means of gene expression. The Heidelberg researchers solved this problem. They integrated optimized variants of a plant sensory domain into natural proteins, in order to develop so-called allosteric thermoswitches. These switches respond with high precision to minimal changes in temperature within the physiological temperature window of human cells, which is between 37°C and 40°C. Thus, protein activity can be altered to tightly control cellular functions.

“Our goal was to make temperature usable as a versatile stimulus for protein control,” explains Ann-Sophie Kröll, a doctoral candidate in Dr. Mathony’s team. To verify feasibility, the new approach was first tested and refined using the bacterium Escherichia coli. Then the researchers transferred their strategy to mammalian cells and engineered temperature-controllable CRISPR-Cas gene editors, whose activity can be finely regulated.

“Using these allosteric thermoswitches, we are able to directly and reversibly control cellular functions without actively intervening in other processes of the cell,” explains Prof. Niopek, who heads the Pharmaceutical Biology department at the Institute of Pharmacy and Molecular Biotechnology.

A modular blueprint for future applications

One major characteristic of this new approach is its high modularity. In addition to the sensory domain originally used, the researchers were also able to integrate an alternative receptor module, which likewise responds to temperature changes, into proteins. The modular design strategy of the Heidelberg researchers thus offers a general blueprint for engineering temperature-controlled protein switches. These can be integrated regardless of the function or spatial structure of the target protein, opening up new possibilities for precise, non-invasive control of various cellular processes.

“We want to further develop thermogenetics into a comprehensive and broadly applicable technology that can be used in the future to precisely regulate nearly every protein solely through heat. We are now at the threshold of the possible in this field,” states junior research group leader Mathony.

According to Prof. Niopek, the thermoswitches “also promise great potential for future biomedical applications.”

A new face for ‘Little Foot,’ the most complete Australopithecus skeleton to date

What did the face of our ancestors look like three million years ago? Our international team has answered this question by virtually reconstructing the facial fragments of Little Foot, the most complete Australopithecus skeleton yet discovered. This reconstruction sheds light on the influence of the environment on how our face evolved. Our findings have just been published in the Comptes Rendus Palevol journal, and the new 3D face of Little Foot can be explored online on the MorphoSource platform.

The search for human origins has never been more fruitful, with fossil discoveries pushing back the appearance of the earliest humans (members of the genus Homo) to 2.8 million years ago, and the development of cutting-edge methods for analyzing these remains, such as recovering genetic information from fossils over two million years old.

Yet, while our knowledge of extinct human species grows with each discovery, the story of our ancestors before the first humans appeared remains blurry. It is during this pivotal period that the traits defining our humanity emerged, enabling our genus’ evolutionary success.

Although the identity of our direct pre-Homo ancestor is far from resolved, one fossil group plays a central role in this search: Australopithecus. This genus, to which the famous “Lucy” belongs (discovered 50 years ago in Ethiopia), inhabited much of Africa and survived for over two million years. Australopithecus is known from many fossil remains, but often these are highly fragmentary, isolated, and have sometimes been distorted over the millions of years they have been buried. Notably, only a handful of skulls preserve nearly the entire face, a part of our anatomy that has profoundly shaped who we are today.

Through digestive, visual, respiratory, olfactory, and non-verbal communication systems, the face is at the heart of interactions between individuals and their physical and social environments.

Significant changes occurred in the facial region throughout human evolution, with most structures generally becoming less robust. However, the factors driving these changes remain unclear. Were they caused by shifts in diet, social behavior, or both? Only the discovery of more complete skulls can clarify this debate, and this is why the skull of Little Foot is crucial.

The ‘Cradle of Humankind’

South Africa has been and remains a crucial region for research into human origins. A century ago, the iconic “Taung Child” was published in Nature as a representative of a new African branch of humanity, Australopithecus. While scientific attention had previously focused on Eurasia, this discovery inspired decades of exploration and major finds across Africa.

In particular, South Africa saw a proliferation of paleontological sites in a region now UNESCO-listed and known as the “Cradle of Humankind.” Among these, Sterkfontein has proven exceptionally rich in fossils, many attributed to the hominin genus Australopithecus, and including numerous remarkably preserved specimens.

But it was in 1994 and 1997 that Sterkfontein yielded its most spectacular find: the skeleton of Little Foot, over 90% complete, and the oldest human ancestor found in southern Africa. To date, it is the most complete Australopithecus skeleton ever discovered, far surpassing Lucy, of which only 40% of the anatomy is preserved.

Our team has been studying this skeleton since its complete excavation concluded in 2017. The skull, in particular, has been the focus of our attention, as it is relatively complete, preserving all parts of the head—the cranium and mandible. However, 3.7 million years of burial underground have fragmented and displaced parts of its fossilized face. This process is especially visible in the forehead and eye sockets (orbits), making it impossible to quantitatively analyze these informative areas. Given the exceptional and unique nature of this fossil, we decided to harness the most recent technological advances in imaging to restore the face of Little Foot.

‘Little Foot’ in Europe

Creating a digital copy of Little Foot was essential to allow the virtual isolation and repositioning of the fragments without damaging the original skull. However, conventional X-ray scanning technologies have limitations. During burial and fossilization, cavities formed in Little Foot’s skull as soft tissues decayed, and these cavities later filled with sediment. X-rays struggle to penetrate this extremely dense sedimentary matrix, limiting image contrast and quality. After several unsuccessful attempts, we turned to a more powerful alternative: synchrotron radiation scanning. A synchrotron is a high-energy particle accelerator used to produce ultra-high-resolution images (at a micron or even sub-micron scale).

With this in mind, we took Little Foot’s skull to England for scanning at the I12 beamline of the Diamond Light Source synchrotron. In the summer of 2019, Little Foot made its first journey outside Africa, carefully escorted across the world and housed in a secure vault during its stay in the UK.

A new face for Australopithecus

Several days were required to scan the entire skull at a resolution of 21 microns. The exceptional images generated revealed intimate details of Little Foot’s anatomy, and also provided the necessary data for facial reconstruction. However, the high quality of the data came at a computational cost: over 9,000 images were generated, representing terabytes of information to process. To virtually isolate the fragments, these images were processed using the supercomputer at the University of Cambridge (England). Once rendered in 3D, the fragments were repositioned according to their anatomical location, and missing parts were recreated to finally restore the complete face of Little Foot.

The size and shape of Little Foot’s orbits, previously obscured by displaced fragments, are among the most striking features of our reconstruction. In primates, the orbital region is heavily influenced by functional (visual) and behavioral (ecological) adaptations. Little Foot’s proportionally large orbits compared to other hominins suggest a strong reliance on sensory information, likely for foraging. This hypothesis is supported by a previous study showing that its visual cortex was more developed than that of modern humans.

The second major result of this study has implications for our understanding of the relationships between Australopithecus groups living in Africa between two and four million years ago. Although the comparative sample is limited, it includes specimens from both East and South Africa. Surprisingly, Little Foot, from a South African site, shows strong similarities with East African specimens. These similarities may indicate that Little Foot shared close ancestors with East African populations, while its probable descendants in South Africa later developed distinct anatomy through local evolution.

While the face provides valuable insights into our ancestors’ adaptations to their environment, the rest of Little Foot’s skull will offer further key elements for understanding our evolutionary history. Notably, the braincase, affected by “plastic” deformation, will require similar work to reconstruct and explore the neurological features of this fossil group.

With Evo 2, AI can model and design the genetic code for all domains of life

The DNA foundation model Evo 2 has been published in the journal Nature. Trained on the DNA of over 100,000 species across the entire tree of life, Evo 2 can identify patterns in gene sequences across disparate organisms that experimental researchers would need years to uncover. The machine learning model can accurately identify disease-causing mutations in human genes and is capable of designing new genomes that are as long as the genomes of simple bacteria.

Open access tools and visualizations

Evo 2 was developed by scientists from Arc Institute and NVIDIA, together with collaborators at Stanford University, UC Berkeley, and UC San Francisco. The model’s code is publicly accessible from Arc’s GitHub, and is also integrated into the NVIDIA BioNeMo framework, as part of a collaboration between Arc Institute and NVIDIA to accelerate scientific research.

Arc Institute also worked with AI research lab Goodfire to develop a mechanistic interpretability visualizer that uncovers the key biological features and patterns the model learns to recognize in genomic sequences. The Evo team has shared its training data, training and inference code, and model weights, making it the largest-scale, fully open-source AI model to date.

Scaling up biological training data

Building on its predecessor Evo 1, which was trained entirely on the genomes of single-celled organisms, Evo 2 is the largest artificial intelligence model in biology to date, trained on over 9.3 trillion nucleotides—the building blocks that make up DNA or RNA—from over 128,000 whole genomes as well as metagenomic data.

In addition to an expanded collection of bacterial, archaeal, and phage genomes, Evo 2 includes information from humans, plants, and other single-celled and multicellular species in the eukaryotic domain of life.

“Our development of Evo 1 and Evo 2 represents a key moment in the emerging field of generative biology, as the models have enabled machines to read, write, and think in the language of nucleotides,” says Patrick Hsu, Arc Institute Co-Founder, Arc Core Investigator, an Assistant Professor of Bioengineering and Deb Faculty Fellow at University of California, Berkeley, and a co-senior author on the paper.

“Evo 2 has a generalist understanding of the tree of life that’s useful for a multitude of tasks, from predicting disease-causing mutations to designing potential code for artificial life. We’re excited to see what the research community builds on top of these foundation models.”

Learning patterns written by evolution

Evolution has encoded biological information in DNA and RNA, creating patterns that Evo 2 can detect and utilize.

“Just as the world has left its imprint on the language of the Internet used to train large language models, evolution has left its imprint on biological sequences,” says co-senior author Brian Hie, an Assistant Professor of Chemical Engineering at Stanford University, the Dieter Schwarz Foundation Stanford Data Science Faculty Fellow, and Arc Institute Innovation Investigator in Residence.

“These patterns, refined over millions of years, contain signals about how molecules work and interact.”

High-performance infrastructure and architecture

Evo 2 was trained for several months on the NVIDIA DGX Cloud AI platform via AWS, utilizing over 2,000 NVIDIA H100 GPUs and bolstered by collaboration with NVIDIA researchers and engineers.

The model can process genetic sequences of up to 1 million nucleotides at once, enabling it to understand relationships between distant parts of a genome.

Achieving this technical feat required the research team to reimagine how an AI model could quickly ingest and make inferences about this scale of data. The resulting AI architecture, called StripedHyena 2, enabled Evo 2 to be trained with 30 times more data than Evo 1 and reason over eight times as many nucleotides at a time.

Applications in disease and synthetic biology

The model already shows enough versatility to identify genetic changes that affect protein function and organism fitness. For example, in tests with variants of the breast cancer-associated gene BRCA1, Evo 2 achieved over 90% accuracy in predicting which mutations are benign versus potentially pathogenic.
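The general idea behind this kind of variant classification can be sketched in code. The snippet below is purely illustrative and does not use the real Evo 2 API: a toy 3-mer frequency model stands in for the trained network, and all function names are invented for this example. The principle is the same, though: score a mutation by how much it lowers the model's likelihood of the sequence, since mutations that break learned patterns tend to be the disruptive ones.

```python
# Illustrative sketch (NOT the real Evo 2 API): score a point mutation
# by comparing a sequence model's log-likelihood of the variant sequence
# against the reference. A toy 3-mer frequency model stands in for the
# actual neural network; a strongly negative delta flags a likely-
# disruptive mutation.

from collections import Counter
from math import log

def train_kmer_model(sequences, k=3):
    """Estimate k-mer probabilities from a set of reference sequences."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def log_likelihood(model, seq, k=3, floor=1e-6):
    """Sum of log-probabilities of every k-mer; unseen k-mers get a floor."""
    return sum(log(model.get(seq[i:i + k], floor))
               for i in range(len(seq) - k + 1))

def variant_score(model, ref_seq, pos, alt_base):
    """Delta log-likelihood of the variant relative to the reference."""
    var_seq = ref_seq[:pos] + alt_base + ref_seq[pos + 1:]
    return log_likelihood(model, var_seq) - log_likelihood(model, ref_seq)

# Train on repeats of a short motif, then score two point mutations.
reference = "ATGGCGATGGCGATGGCG"
model = train_kmer_model([reference] * 10)
benign = variant_score(model, reference, 0, "A")      # base unchanged: delta 0
disruptive = variant_score(model, reference, 4, "T")  # breaks the motif: large negative delta
```

A real foundation model replaces the k-mer table with a network trained on trillions of nucleotides, which is what lets it separate benign from pathogenic BRCA1 variants far better than simple sequence statistics could.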

Insights like this could save countless hours and research dollars needed to run cell or animal experiments, by finding genetic causes of human diseases and accelerating the development of new medicines.

In the year since its preprint release, researchers have applied the model to a range of scientific problems, from predicting genetic disease risk in Alzheimer’s patients to assessing variant effects across domesticated animal species.

Arc researchers have also used Evo 2 to design functional synthetic bacteriophages, demonstrating potential applications for treating antibiotic-resistant bacteria.

Future possibilities and targeted therapies

In addition to genetic analysis, Evo 2 could be useful for engineering new biological tools or treatments. “If you have a gene therapy that you want to turn on only in neurons to avoid side effects, or only in liver cells, you could design a genetic element that is only accessible in those specific cells,” says co-author and computational biologist Hani Goodarzi, an Arc Core Investigator and an Associate Professor of Biochemistry and Biophysics at the University of California, San Francisco.

“This precise control could help develop more targeted treatments with fewer side effects.”

The research team envisions that more specific AI models could be built with Evo 2 as a foundation. “In a loose way, you can think of the model almost like an operating system kernel—you can have all of these different applications that are built on top of it,” says Arc’s Chief Technology Officer Dave Burke, a co-author on the paper.

“From predicting how single DNA mutations affect a protein’s function to designing genetic elements that behave differently in different cell types, as we continue to refine the model and researchers begin using it in creative ways, we expect to see beneficial uses for Evo 2 we haven’t even imagined yet.”

Ethics, safety, and global impact

In consideration of potential ethics and safety risks, the scientists excluded pathogens that infect humans and other complex organisms from Evo 2’s base dataset, and ensured that the model would not return productive answers to queries about these pathogens. Co-author Tina Hernandez-Boussard, a Stanford Professor of Medicine, and her lab members assisted the team to implement responsible development and deployment of this technology.

“Evo 2 has fundamentally advanced our understanding of biological systems,” says Anthony Costa, director of digital biology at NVIDIA.

“By overcoming previous limitations in the scale of biological foundation models with a unique architecture and the largest integrated dataset of its kind, Evo 2 generalizes across more known biology than any other model to date—and by releasing these capabilities broadly, Arc Institute has given scientists around the world a new partner in solving humanity’s most pressing health and disease challenges.”

Injectable ‘satellite livers’ could offer an alternative to liver transplantation

More than 10,000 Americans who suffer from chronic liver disease are on a waitlist for a liver transplant, but there are not enough donated organs for all of those patients. Additionally, many people with liver failure aren’t eligible for a transplant if they are not healthy enough to tolerate the surgery.

To help those patients, MIT engineers have developed “mini livers” that could be injected into the body and take over the functions of the failing liver.

In a new study in mice, the researchers showed that these injected liver cells could remain viable in the body for at least two months, and they were able to generate many of the enzymes and other proteins that the liver produces.

“We think of these as satellite livers. If we could deliver these cells into the body, while leaving the sick organ in place, that would provide booster function,” says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science (IMES).

Bhatia is the senior author of the new study, which is published in the journal Cell Biomaterials. MIT postdoc Vardhman Kumar is the paper’s lead author.

Restoring liver function

The human liver plays a role in about 500 essential functions, including regulation of blood clotting, removing bacteria from the bloodstream, and metabolizing drugs. Most of these functions are performed by cells called hepatocytes.

Over the past decade, Bhatia’s lab has been working on ways to restore hepatocyte function without a surgical liver transplant. One possible approach is to embed hepatocytes into a biomaterial such as a hydrogel, but these gels also have to be surgically implanted.

Another option is to inject hepatocytes into the body, which eliminates the need for surgery. In this study, Bhatia’s lab sought to improve on this strategy by providing an engineered niche that could enhance the cells’ survival and facilitate noninvasive monitoring of graft health.

To achieve that, the researchers came up with the idea of injecting cells along with hydrogel microspheres that would help them stay together and form connections with nearby blood vessels. These spheres have special properties that allow them to act like a liquid when they are closely packed together, so they can be injected through a syringe and then regain their solid structure once inside the body.

In recent years, researchers have explored using hydrogel microspheres to promote wound healing, as they help cells to migrate into the spaces between the spheres and build new tissue. In the new study, the MIT team adapted them to help hepatocytes form a stable tissue graft after injection.

“What we did is use this technology to create an engineered niche for cell transplantation,” Kumar says. “If the cells are injected in the absence of these spheres, they would not integrate efficiently with the host, but these microspheres provide the hepatocytes with a niche where they can stay localized and become connected to the host circulation much faster.”

The injected mixture also includes fibroblast cells—supportive cells that help the hepatocytes survive and promote the growth of blood vessels into the tissue.

Working with Nicole Henning, an ultrasound research specialist at the Koch Institute, the researchers developed a way to inject the cell mixture using a syringe guided by ultrasound. After injection, the researchers can also use ultrasound to monitor the long-term stability of the implant.

In this study, the mini livers were injected into the fat tissue in the belly. In the future, similar grafts could be delivered to other sites in the body, such as into the spleen or near the kidneys. As long as they have enough space and access to blood vessels, the injected hepatocytes can function similarly to hepatocytes in the liver.

“For a vast majority of liver disorders, the graft does not need to sit close to the liver,” Kumar says.

An alternative to transplantation

In tests in mice, the researchers injected the mixture of liver cells and microspheres into an area of fatty tissue known as the perigonadal adipose tissue. Once the cells are localized in the body, they form a stable, compact structure. Over time, blood vessels begin to grow into the graft area, helping the injected hepatocytes to stay healthy.

“The new blood vessels formed right next to the hepatocytes, which is why they were able to survive,” Kumar says. “They were able to get the nutrients delivered right to them, they were able to function the way they’re supposed to, and they produced the proteins that we expect them to.”

After injection, the cells remained viable and able to secrete specialized proteins into the host circulation for eight weeks, the length of the study. That suggests that the therapy could potentially work as a long-term treatment for liver disease, the researchers say.

“The way we see this technology is it can provide an alternative to surgery, but it can also serve as a bridge to transplantation where these grafts can provide support until a donor organ becomes available,” Kumar says. “And if we think they might need another therapy or more grafts, the barriers to do that are much less with this injectable technology than undergoing another surgery.”

With the current version of this technology, patients would likely need to take immunosuppressive drugs, but the researchers are exploring the possibility of developing “stealthy” hepatocytes that could evade the immune system, or using the hydrogel microspheres to deliver immunosuppressants locally.

Synthetic gene medicines may disrupt DNA repair

Antisense oligonucleotides (ASOs), used to treat genetic diseases, can affect how cells repair damage to their DNA. This is shown in a new study from Karolinska Institutet, published in Nature Communications. The findings may have implications for the development of future genetic medicines and deepen our understanding of how RNA molecules, the natural counterparts of ASOs, participate in DNA repair systems.

ASO molecules are synthetic, short nucleic acid molecules used to regulate gene expression. They are included in several approved gene therapies and are being evaluated in many ongoing clinical trials. In the current study, researchers at Karolinska Institutet examined how these molecules influence the cellular systems responsible for detecting and repairing DNA damage.

The researchers discovered that ASO molecules can bind to some of the most important DNA repair enzymes in cells. When ASOs bind to these proteins, the resulting complexes accumulate in dense clusters in the cell nucleus, known as condensates or “PS bodies.” This occurs at concentrations typically used in laboratory experiments.

Incorrect activation of DNA repair signals

The result is that the cell activates a repair signal even though there is no actual DNA damage, which may disrupt the natural repair process and lead to the buildup of harmful DNA alterations.

“Our results show that ASOs can trigger a repair response that should not normally be activated, and this could affect the cell’s normal handling of DNA damage,” says Marianne Farnebo, research group leader at the Department of Oncology-Pathology and the study’s senior author.

At the same time, she emphasizes that although the results may seem concerning, they must be put into context.

“It is important to distinguish between the ASO treatment we primarily studied and clinically used methods, where much lower concentrations of ASOs reach the cell nucleus.”

Important questions for the future of genetic medicines

The researchers stress that more and larger studies are needed to assess potential risks. But since ASO-based therapies are already used in clinical care and many more are under development, the findings are relevant now. They may contribute to improved safety assessments and influence how future molecules are designed.

“Furthermore, our results show that the impact on DNA repair can occur in several distinct ways, not just through the clusters formed in the cell nucleus,” says Linn Hjelmgren, doctoral student at the same department and the study’s first author.

The study is based on advanced biochemical analyses and microscopy, in which the researchers monitored how ASOs behave in cell models.

From high‑tech greenhouses to fruit netting: How protected cropping can shield crops from climate extremes

For many of us, food is something we buy at a supermarket or order at a café. We usually give little thought to the complex systems required to produce and deliver it—until they stop working. It’s not common to think of Australia as a place at risk of food insecurity. It has vast tracts of fertile land and the capacity to feed its population many times over; around 70% of the food it produces is exported.

But the searing southeast heat and widespread northern flooding this summer demonstrate the very real risks to food production. Temperature extremes, heat waves, droughts, floods, and shifting seasonal patterns are worsening as the climate changes.

People can seek refuge indoors. But the plants and animals we rely on for food have no such protection. In response, some orchardists and farmers are taking up an approach known as protected cropping, where crops are shielded from threats. As South Australian persimmon and avocado grower Craig Burne told the ABC:

“Without misting and netting in place, I don’t think we’d successfully grow either of these crops in this climate any more.”

As climate change intensifies, protected cropping could better safeguard some crops. Overseas, nations such as the Netherlands have taken up protected cropping to drastically boost fruit and vegetable exports. But it’s early days in Australia, and to scale up, the sector will have to overcome several barriers.

What defines protected cropping?

Protection is broadly defined. It can range from low-tech solutions such as shade houses and netting to medium-technology polytunnels (hoop-shaped plastic covers) through to highly sophisticated automated glasshouses.

Countries facing land constraints, such as the Netherlands, have been the most enthusiastic in taking up this approach. Guided by the principle of “twice the food using half the resources,” Dutch farmers have turned to high-tech glasshouses.

The result has been remarkable: a country with extremely limited agricultural land has become a top exporter of fruit and vegetables.

Emerging in Australia

In Australia, protected cropping is gaining popularity off a small base. In 2023, around 14,000 hectares of fruit and vegetable crops were growing under some form of protection. That’s around 17% of the total area.

Most of this area relies on low-tech systems, however. Just over two-thirds (68%) of all protected cropping areas rely on low-tech shade houses or netting, mainly in southern Queensland and northern New South Wales.

Medium-tech systems such as polytunnels and polyhouses account for about 30% of the total. These systems are found mainly in Tasmania, northern Queensland, and Western Australia.

High-tech glasshouses account for only 2% of the total. These are primarily found near bigger cities such as Sydney, Melbourne, and Adelaide.

To date, farmers have relied on protected cropping for high-value crops such as tomatoes, capsicums, cucumbers, berries, leafy greens, and more expensive tree crops.

In 2022, Australia’s protected cropping industry was worth an estimated A$100 million to farmers. Demand for workers in the sector is growing at 5% a year, and around 10,000 people worked in the industry as of 2022.

Real benefits—at a cost

For farmers, protected cropping offers clear advantages across low-, medium-, and high-tech approaches.

These methods can create an environment favorable to year-round plant growth, improving the consistency and quality of yields. By controlling factors such as temperature, plant nutrition, humidity, light, and pests, protected cropping reduces production risks and increases crop yield and quality.

For farmers, being able to control their environment in a predictable way is particularly valuable in an uncertain climate. Protecting crops means less (but not zero) risk from extreme weather. Other benefits include more efficient use of land, water, fertilizer and energy.

Crops can also be cultivated closer to markets. This improves food freshness, lowers transport emissions, and strengthens domestic food security.

For exporters, produce grown in protected systems is more likely to meet stringent biosecurity and quality standards of overseas buyers.

Innovation is essential to unlock these benefits at scale. Advances in plant breeding, sensors, automation, data analytics, controlled supply of nutrients, lighting systems, and biological controls for pests and plant diseases can significantly boost farm production, profits, and sustainability.

What’s stopping protected cropping?

Australia’s farmers are highly exposed to extreme weather events and the changing water cycle. Protected cropping would seem to be a logical way to control some of these risks.

To date, protected cropping hasn’t achieved scale in Australia. That’s because the horticulture industry is dominated by small businesses with limited capacity to invest in new systems.

High-tech protected cropping systems offer the best results, but the cost is enough to put off many farmers. Finding and keeping skilled workers is another challenge.

Scaling up won’t just happen

Protected cropping is an excellent solution. But it’s out of reach for many farmers who would benefit.

In nations such as Sweden and the Netherlands, governments have worked to encourage the uptake of protected cropping and boost exports of fruit and vegetables, through world-class research and innovation precincts.

Australia’s federal and state governments could accelerate uptake by setting targets to expand protected cropping areas, encouraging adoption through policy levers, and investing in joint infrastructure and incentives to cut installation costs.

A good start could be to focus on areas where high-value crops are grown in unprotected environments and work to create regional clusters of expertise, shared infrastructure, and skilled jobs.

Governments can’t do it without buy-in from industry bodies, researchers, and farmers. Translating innovation from laboratory to field is never easy. But it can—and arguably must—be done, as Australia’s farmers face a very uncertain climate.

Protected cropping is not a silver bullet. Polytunnels can’t protect against floods, for instance. But other countries have successfully used these methods to boost yields, safeguard local food production, and create new higher-wage jobs. The approach could do the same here.

Researchers create world’s largest dog and cat tumor database

Researchers from the University of Liverpool’s Veterinary Data Science Group and the University of Las Palmas de Gran Canaria have created the world’s largest open-source database of canine and feline tumors, containing more than one million records. This unique resource aims to help transform understanding of factors influencing the risk of pets getting cancer.

The team brings together expertise in veterinary pathology, epidemiology, data science and clinical practice. By working with veterinary diagnostic laboratories and applying advanced methods for extracting and standardizing diagnostic data, they have created a unified resource.

An element of the team’s work that focuses on dog tumors is explored in a paper in Veterinary and Comparative Oncology, titled “Epidemiology of Four Major Canine Tumours in the UK: Insights From a National Pathology Registry With Comparative Oncology Perspectives.”

The size of the tumor registry makes it possible to study rare cancers and uncommon breeds in meaningful detail for the first time. Researchers worldwide can now access rich and standardized data to explore patterns previously hidden by fragmented reporting.

David Killick, Professor of Veterinary Oncology at the University of Liverpool, said, “It is important to understand risks for cancers—and this applies to pets too. But for dogs and cats, most cancer diagnosis data sit in private veterinary labs, inaccessible for research. Working through SAVSNET, our Small Animal Veterinary Surveillance Network, we wanted to see whether we could bring together large volumes of these data into one meaningful, research-ready database.

“This tumor registry is a major step towards better understanding cancer risk in pets. In addition to allowing better identification of breed-related risk of specific tumor types, early analyses have raised the question of how neutering practices may influence risks of particular cancers. The scale of the data also opens new possibilities for exploring the genetic basis of these cancers.”

Jose Rodríguez Torres, Ph.D., a Veterinary Data Scientist at the University of Las Palmas de Gran Canaria, added, “Analyzing cancer diagnoses is well established in human medicine, but similar work in animals has lagged behind due to fragmented data. This study is a leading step forward. With more than 200 breeds and more than 150 tumor types represented, these data can now be explored by researchers worldwide to better understand cancer risk across many tumor–breed combinations.”

Dr. Francesco Cian from BattLab, one of the participating labs, and a co-author on the paper, said, “It has been a pleasure to work with University of Liverpool and ULPGC on this project and to see a new use for the data we generate. Typically, our results are used by veterinarians to support owners and their pets. In this research, we were able to collate anonymized results and generate new knowledge about the tumor risk faced by individual pets across a wide range of cancers.”

The team plans to expand the registry by collaborating with additional laboratories and continues to collect data in real time. As the registry grows, analysis can be further refined—for example, by better understanding how dogs with tumors compare to the wider UK canine population.

Why crowning the protein that makes jellyfish glow green as a model can help scientists streamline biology

Fruit flies, mice, zebrafish, yeast and the tiny worm C. elegans are model organisms that have carried modern biology on their backs.

Scientists did not choose them for their charisma. They were chosen because studying them illuminates biological principles shared across many species. Their biology is simple enough for researchers to master yet deep enough to keep delivering new insights decades later.

But biologists don’t have a common reference point for a vast area of the field: proteins, the cell’s doers. Proteins catalyze chemical reactions, give cells their structure and help them communicate with each other. Most organisms use tens of thousands of protein types, and each can be mutated, modified and measured in different ways and in countless environments. Thanks in part to artificial intelligence, researchers are also generating new proteins faster than they can study them.

Without a shared reference point, study results are hard to compare. Two labs can study the same protein under different experimental conditions and end up with findings that do not line up. The result is a scientific literature full of isolated findings that are sometimes duplicated and difficult to generalize.

As a computational chemist who studies fluorescent proteins, I argue that labs also need a set of model proteins. Just as fruit flies and mice anchor whole fields, model proteins can help researchers build on each other’s findings and better understand the fundamentals of biology.

Green fluorescent protein as a model

If model proteins are to be yardsticks, the best place to start is with proteins researchers already reach for when they need a reliable standard. Green fluorescent protein is at the top of that list.

Green fluorescent protein, first isolated from a jellyfish, glows bright green under blue light. Biologists fuse green fluorescent protein to other proteins to track where the proteins go and when they are made.

Green fluorescent protein is already a de facto reference point for the field, used as a practice protein in experiments before attempting bigger goals. In the early 2000s, researchers used the protein and a yellow version in cloned pigs to show that foreign genes could be added to large mammals and reliably work. Green fluorescent protein made it obvious that the new gene was successfully incorporated because researchers could literally see that the pigs’ cells were making the protein encoded by the fluorescence genes.

The long-term aim of these experiments was to engineer pigs to produce specific human proteins that help the immune system accept a pig organ rather than reject it. Green fluorescent protein helped show that the basic engineering of this idea could work, which eventually led to the first pig-to-human kidney transplants.

The use of green fluorescent protein is not the endpoint of most studies but the proof step. It allows researchers to say, yes, the new gene is there, the cell is making the protein, the protein is working and will probably work with other proteins.

AI is forcing benchmarks

When researchers are hunting for new proteins to use as enzymes, treatments or materials, protein language models and other generative AI methods can propose huge numbers of plausible protein sequences for them to test. While some AI-designed proteins do work in the lab and can help reduce trial and error, many candidate proteins fail.

Fluorescent proteins can be a useful stress test for protein language models. The hardest part of using AI to generate proteins is proving that the sequences it suggests can become a properly folded, working protein.

Green fluorescent protein makes that proof straightforward because fluorescence allows you to quickly see that the protein has folded correctly. You can predict the brightness, stability or color of fluorescent proteins, then directly check whether the AI-generated protein matches. Like a mouse study that hints a drug might work in humans, green fluorescent protein doesn’t guarantee an AI model will succeed on every protein, but it’s a quick, widely trusted sign that the design pipeline is doing something right.

Calling green fluorescent protein a model protein would also improve how biology is taught. Like classic model organisms, green fluorescent protein is safe and visual. It is also forgiving, producing a clear, fluorescent signal even when student study designs aren’t perfect.

These traits make it an educational gateway to ideas such as gene expression, protein folding and bioengineering. It can turn an abstract concept into something you can see in a test tube or under a microscope.

Model organisms work because scientific communities agreed to build around common reference points. I believe protein science is now vast enough to need the same, and naming green fluorescent protein as a model protein could make it easier to connect discoveries, teach students and assess new tools.

The glow, in other words, can still guide scientists—not just by dazzling, but by helping the whole field add up.

Bringing quantum ideas to the messy world of disordered proteins

Imagine trying to design a key for a lock that is constantly changing its shape. That is the exact challenge we face in modern drug discovery when dealing with intrinsically disordered proteins.

For decades, the classic “structure-function” paradigm in biology taught us that a protein’s amino acid sequence encodes a single, unique, and stable 3D structure, which in turn dictates what it does. However, nature is far more rebellious. A significant portion of the proteome—including roughly 79% of human cancer-associated proteins—defies this rule. These proteins contain intrinsically disordered regions (IDRs) that lack a stable folded structure under normal conditions. Instead of folding neatly, they exist as dynamic, shifting ensembles of conformations.

Because of their shapeshifting nature, IDRs have long been labeled “undruggable” by traditional structure-based drug design, which relies on stable 3D pockets to anchor small molecules. Even brilliant, revolutionary AI tools like AlphaFold2, trained on databases of strictly folded structures, struggle to capture the ensemble nature of these flexible regions.

Exhaustively searching the massive, flat energy landscape of an IDP requires traversing numerous degrees of freedom—an enormous computational task even for state-of-the-art supercomputers. This is where quantum computing offers a fresh advantage. By leveraging the principles of quantum mechanics, we can potentially explore these complex energy landscapes more effectively to find the lowest-energy conformations.

The paper is published in the journal PLOS One.

Quantum for biologists

My Ph.D. work set out to bridge the gap between abstract quantum algorithms and practical biological applications. We developed QuPepFold, a modular Python package specifically designed to democratize hybrid quantum-classical protein folding simulations.

Currently, there is a significant entry barrier for structural biologists who want to harness quantum resources but don’t have a background in quantum programming. QuPepFold acts as an automated abstraction layer. It hides the intimidating technical details of circuit construction and error mitigation, allowing researchers to simply input a peptide sequence and retrieve meaningful 3D structural data.

Here is how we do it under the hood: we map the discrete 3D conformational space of a peptide onto a tetrahedral lattice, where each amino acid’s position is specified by binary codes representing directional turns. To find the thermodynamic ground state—the protein’s native, most stable form—we use a Variational Quantum Eigensolver (VQE).
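To make the encoding step concrete, here is a minimal sketch of decoding a measured bitstring into lattice coordinates. The two-bit-per-turn convention, the function names, and the particular bond directions are illustrative assumptions for exposition, not QuPepFold’s actual interface:

```python
import numpy as np

# Four tetrahedral bond directions; alternating the sign between even
# and odd residues keeps successive beads on the two sublattices of a
# diamond-like lattice.
DIRS = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]])

def decode_turns(bits):
    """Read a measured bitstring two bits at a time into turn indices 0-3."""
    assert len(bits) % 2 == 0
    return [int(bits[i:i + 2], 2) for i in range(0, len(bits), 2)]

def turns_to_coords(turns):
    """Follow each encoded turn to place the residues on the lattice."""
    pos = np.zeros(3, dtype=int)
    coords = [pos.copy()]
    for i, turn in enumerate(turns):
        step = DIRS[turn] * (1 if i % 2 == 0 else -1)
        pos = pos + step
        coords.append(pos.copy())
    return np.array(coords)

# A 4-residue peptide has 3 bonds, so a measurement yields 6 bits.
coords = turns_to_coords(decode_turns("001011"))
```

Every bond step has the same length (√3 in lattice units), so any bitstring decodes to a geometrically valid chain; the classical energy function is then evaluated on these coordinates inside the VQE loop.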

Filtering the noise

However, current quantum hardware is notoriously noisy. To combat this, we optimized our algorithm using a Conditional Value-at-Risk (CVaR) objective function. Instead of calculating the average energy of all the folds the quantum circuit predicts, the CVaR approach focuses strictly on the “tail” of the lowest-energy measurement results.
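The idea behind the CVaR aggregation can be sketched in a few lines (parameter names here are my own, not the paper’s): rather than averaging the energies of all measured bitstrings, keep only the lowest-energy fraction of shots.

```python
import numpy as np

def cvar_objective(energies, counts, alpha=0.1):
    """CVaR objective: mean energy of the best alpha fraction of shots.

    energies: energy of each measured bitstring
    counts:   how many shots produced that bitstring
    alpha:    fraction of shots kept in the low-energy tail (must be > 0)
    """
    order = np.argsort(energies)
    e = np.asarray(energies, dtype=float)[order]
    c = np.asarray(counts, dtype=float)[order]
    keep = alpha * c.sum()  # number of shots in the tail
    acc = used = 0.0
    for ei, ci in zip(e, c):
        take = min(ci, keep - used)
        if take <= 0:
            break
        acc += take * ei
        used += take
    return acc / used
```

Setting alpha=1.0 recovers the ordinary expectation value; a small alpha makes the classical optimizer chase the lowest-energy folds the circuit actually produced, which is more robust when noise pushes the average upward.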

Moving forward

We designed QuPepFold to be hardware-agnostic; it currently runs seamlessly across Qiskit Aer simulators, Braket’s tensor-network simulator, and real physical devices like IonQ’s Aria-1 quantum computer. When we ran our simulations on the IonQ Aria-1, the algorithm successfully reproduced ground-state energies with over 90% fidelity. The momentum behind this project has been incredibly validating, especially following our recent acceptance into the IBM Qiskit ecosystem and securing AI Innovation Challenge infrastructure funding.

If you’re looking for hype, QuPepFold is the wrong story. This work does not “solve protein folding,” and it doesn’t magically make IDRs easy. It is a coarse-grained lattice-based approach targeted at very short peptides, and it still requires extensive sampling and has to contend with hardware noise.

Even conceptually, focusing on the ground state can miss biologically relevant ensemble diversity—an IDR’s function may depend on populated higher-energy states, not just the minimum. I see QuPepFold as a practical bridge: a reproducible way to experiment with quantum objectives (like CVaR) in a biophysically motivated pipeline today, while keeping the architecture modular enough to evolve as hardware and algorithms improve.