Natural competition between brain circuits may boost information processing

Over the past decades, neuroscience studies have painted an increasingly detailed picture of the human brain, its organization and how it supports various functions. To plan and execute desired behaviors in changing circumstances, networks of neurons in the brain can either work together or suppress each other, thus employing both cooperative and competitive interaction strategies.

Researchers at the University of Oxford, the University of Cambridge, McGill University, Aarhus University and Pompeu Fabra University recently set out to better understand the mammalian brain’s underlying dynamics, specifically how its underlying architecture balances cooperative and competitive interactions between neural circuits. Their paper, published in Nature Neuroscience, offers new insight that could both improve the understanding of the brain and inform the development of brain-inspired computational models.

“Building models of the brain is an important part of modern neuroscience,” Andrea Luppi, first author of the paper, told Medical Xpress. “As Nobel laureate Richard Feynman said, ‘What I cannot create, I do not understand.’ Most current models, however, share a limitation. Everyday experience, from focusing attention to switching between tasks, reveals that brain systems must compete for limited resources.

“Our brains cannot do everything at once, and not all regions can be active together all the time. Yet most macroscale brain simulations run in the past 20 years have not taken competitive interactions into account, instead forcing regions to cooperate.”

Superior computational performance of connectome-based neuromorphic networks with competitive generative interactions. Credit: Nature Neuroscience (2026). DOI: 10.1038/s41593-026-02205-3

In earlier computer simulations of the human brain, the activation of one brain region prompted downstream neighboring regions to also increase their activity. This often resulted in an excess of cooperation between connected regions, which led to the emergence of overly synchronized states that are rarely observed in the human brain or in the brains of other mammals.

“We had a simple question: should we include competition in our models and would this lead to better models?” said Luppi.

Modeling the brains of mice, macaques and humans

Luppi and his colleagues built computational models that simulated activity across the entire mammalian cortex. These models were created using available imaging data showing brain connections in the brains of humans, macaques and mice.

“We compared two types of brain models: the traditional one in which all interactions between brain regions were cooperative, and another in which regions could either excite or suppress each other’s activity (essentially competing for who is active),” explained Luppi.

“Across all three species, the models that included competitive interactions consistently outperformed cooperative-only models, producing more realistic activity. Competitive interactions act as a stabilizing force, preventing runaway activity and allowing different brain systems to take turns in shaping the direction of the brain’s ebbs and flows.”

The researchers showed that computational models of the brain performed better when they included both cooperative and competitive interactions. This finding was consistent with models based on the wiring of the human, macaque and mouse brains.
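The contrast between cooperative-only and mixed cooperative/competitive coupling can be illustrated with a deliberately tiny toy model. This is not the authors' actual simulation: it is a small network of phase oscillators in which all-positive coupling over-synchronizes the network, while randomly signed (competitive) links keep synchrony lower. All parameter values are illustrative assumptions.

```python
import math, random

def simulate(signed, n=20, steps=2000, dt=0.05, k=0.8, seed=1):
    """Euler-integrate a small Kuramoto-style oscillator network and
    return the time-averaged synchrony (order parameter in [0, 1])."""
    rng = random.Random(seed)
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    freqs = [1.0 + 0.1 * rng.gauss(0, 1) for _ in range(n)]
    # Coupling matrix: all cooperative (+k), or a random mix of
    # cooperative (+k) and competitive (-k) links.
    w = [[(k if not signed or rng.random() < 0.5 else -k)
          for _ in range(n)] for _ in range(n)]
    sync, samples = 0.0, 0
    for step in range(steps):
        # Synchronous update: the comprehension reads the old `phases`.
        phases = [p + dt * (freqs[i] + sum(
            w[i][j] * math.sin(phases[j] - p) for j in range(n)) / n)
            for i, p in enumerate(phases)]
        if step >= steps // 2:  # average over the second half only
            re = sum(math.cos(p) for p in phases) / n
            im = sum(math.sin(p) for p in phases) / n
            sync += math.hypot(re, im)
            samples += 1
    return sync / samples

coop = simulate(signed=False)   # cooperative-only network
mixed = simulate(signed=True)   # network with competitive links
print(f"synchrony, cooperative-only: {coop:.2f}")
print(f"synchrony, with competition: {mixed:.2f}")
```

The order parameter is a standard synchrony measure: near 1 when all oscillators lock together (the overly synchronized states rarely seen in real brains), lower when subgroups take turns, which is the qualitative effect the study attributes to competition.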

“Crucially, computer models that included competitive interactions were not only more accurate overall compared with cooperative-only models, but also more individual-specific, better capturing the unique ‘brain fingerprint’ that distinguishes one person’s brain from another’s,” said Luppi.

“This is important because digital twins (i.e., virtual replicas of the brains or other organs of specific individuals) are increasingly proposed as tools for testing treatments virtually, before applying them to real people. If these models fail to capture the fundamental principles of each patient’s unique brain organization, their predictions won’t be personalized and may, in the worst case, even be misleading.”

Implications for neuroscience research and brain-inspired computing

The results of this recent study highlight the importance of including both competitive and cooperative interactions between neural circuits in computer simulations of the mammalian brain or when developing brain-inspired computational models. In the future, they could inform the creation of better performing or more human-like artificial intelligence (AI) systems, as well as more accurate digital twins.

“Using the competitive model to predict the effects of different treatments in patients will be one of the most exciting next steps towards development of better models for personalized medicine,” added Luppi. “Looking ahead, the general principles of brain organization across species offer a path forward for understanding the principles of intelligent architectures and using them to shape the next generation of artificial intelligence.”

How to contain avian flu H5N1 if human-to-human spread begins

At this point, avian flu H5N1 is thought to have very limited ability to transmit between humans, but a recent case in British Columbia with an unknown source of transmission has piqued the curiosity and concern of scientists, including York University Professor Seyed Moghadas. Did this lone case come about through transmission from an animal or another person, and if it was via human transmission, what methods would control its spread in the human population?

Moghadas, director of York’s Agent-Based Modeling Laboratory in the Center of Excellence in AI for Public Health Advancement, and a group of researchers used modeling to understand which control measures would work best should human-to-human transmission become possible.

“The idea was, let’s evaluate some of the interventions that we usually implement at the very earliest stage of a disease outbreak or emerging disease, which we know very little about,” he says.

For the research, “Containment Scenarios for Post-Spillover Transmission Chains of Avian Influenza H5N1 from Poultry to Humans,” published in Nature Health, various scenarios from isolation to vaccination before or after a spillover event were modeled.

It is one of only a few studies that have explicitly modeled outbreak dynamics following spillover into humans, or the effectiveness of public health interventions during the early, highly uncertain phase of an emerging outbreak.

Moghadas, a professor of computational epidemiology and vaccine science in York’s Faculty of Science, and his colleagues were already collecting data on H5N1 cases in the United States when the Canadian case arose. Given the unknown nature of transmission, the team decided to pivot their work to look at what was happening in B.C.

“The case in B.C. was of particular interest to us because no definitive source of exposure was identified, including no direct contact with infected animals or known high-risk settings such as poultry farms,” says Moghadas. “Because of that, it came to our attention that maybe there is some sort of transmission going on between humans.”

As far as health and science experts know, H5N1 can only be transmitted among poultry and dairy cattle on farms, as well as through wild birds, and from these animals to humans, but sustained human-to-human transmission has not been established.

The person from B.C., however, had no clearly identified exposure and even though human infection from animals is rare, avian influenza H5N1 is considered highly pathogenic and a potentially serious and evolving threat to global public health.

“This virus was first identified in 1997 in Southeast Asia. This kind of zoonotic virus essentially jumps from the bird or animal side to the human side sometimes; mostly it circulates among wild birds,” says Moghadas. “There is no confirmation that human-to-human transmission happens as yet in North America.”

The virus has only been in North America since 2022, but surveillance monitoring for it began in 2003. Up until recently, close to 1,000 human cases and just under 500 deaths have been reported globally, although the true number of cases could be higher because not all infections are reported or symptomatic. The virus has expanded not only its geographical range, but also the range of animal species it can infect.

“Evolution of influenza viruses of any type is always a challenge for humans. The flu virus is one of the very rapidly mutating pathogens,” he says. The concern is it will mutate to be able to transmit between humans. How viable is it? How easily can it spillover from animals to humans, and how long could the potential chain of transmission from human-to-human become? These are still open questions.

“Quantifying that risk was important for us because that could also give us direction in terms of how bad the disease could be and what strategies will work to contain it,” says Moghadas. “We have very few measures in place or a strategy to deal with it at this point, given that the transmission between humans is not established.”

As it is an avian flu virus, which often triggers a higher viral load, reducing the risk and severity of infection will likely require two doses of a vaccine similar to the one used during the H1N1 pandemic.

The researchers used Abbotsford, B.C., as the model’s location because it is a dense poultry-farming area. The simulation’s starting point is after a spillover has already happened.

“If a human becomes infected, how do we block this single individual from triggering a large outbreak? Or if the infection is going on between humans, can we block these chains, and to what degree can we block them?” asks Moghadas. “What is the effectiveness of either self-isolation of symptomatic cases, or vaccination of farmers, or vaccination of farmers and their household members?”

Even with mitigation measures, someone in the farmer’s family could potentially be infected by the farmer and then transmit it to someone in the community.

The team evaluated two different types of vaccination strategies. One was reactive, which means that you trigger a vaccination program after a case has been identified somewhere. The second strategy was pre-emptive—individuals, such as farmers, are vaccinated before any case is identified.

What they found is that reactive vaccination has very limited additional benefits outside of self-isolation, but pre-emptive vaccination adds substantial additional benefits on top of self-isolation.
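The intuition behind that finding can be sketched with a simple branching process. This is a hypothetical illustration, not the authors' agent-based model: with reactive vaccination, transmission runs uncontrolled for a few generations before the program starts, while pre-emptive vaccination suppresses transmission from the first case. All rates, delays and coverage values below are invented assumptions.

```python
import math, random

def poisson(lam, rng):
    # Knuth's algorithm for sampling a Poisson-distributed offspring count.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def mean_outbreak_size(preemptive, runs=500, r0=1.5, vacc_eff=0.7,
                       coverage=0.6, reactive_delay=3, gens=12, seed=42):
    """Average total cases over many simulated transmission chains,
    each seeded by a single spillover infection."""
    rng = random.Random(seed)
    totals = []
    for _ in range(runs):
        cases, total = 1, 1
        for g in range(gens):
            # Vaccination lowers transmission once the program is active:
            # immediately if pre-emptive, after a detection delay if reactive.
            active = preemptive or g >= reactive_delay
            r = r0 * (1 - vacc_eff * coverage) if active else r0
            cases = sum(poisson(r, rng) for _ in range(cases))
            total += cases
            if cases == 0:
                break  # the chain has died out
        totals.append(total)
    return sum(totals) / runs

pre = mean_outbreak_size(preemptive=True)
reactive = mean_outbreak_size(preemptive=False)
print(f"mean outbreak size, pre-emptive: {pre:.1f}")
print(f"mean outbreak size, reactive:   {reactive:.1f}")
```

Under these made-up parameters, a few uncontrolled generations before a reactive program starts are enough to multiply the eventual outbreak size, which is the qualitative pattern the study reports.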

Should the virus be confirmed to be capable of human-to-human transmission, Moghadas says they want to limit the chain of transmission and minimize the risk of evolution of the virus to become more adapted to human conditions.

For now, he says, when cases are identified, the person should self-isolate immediately. The authorized vaccine should be distributed quickly to target populations, though it could take several weeks to achieve population-level effectiveness.

“Timely action is a critical part of controlling the spread. Self-isolation of symptomatic cases has a significant effect, but that comes with the caveat that we don’t know if everybody who is infected will develop symptoms,” says Moghadas.

“There could be potential asymptomatic cases we don’t identify and by the time we do identify them, they’ve already been infecting others in the chain of transmission. This case in B.C. was particularly concerning because they could not find the source of infection.”

The concern is not only that the virus might be able to jump from animals to humans, but also the potential for it to mutate during early human transmission chains, making it more adaptable to infecting humans. This underscores the risk of local outbreaks with global implications, he says.

“My research is all about evidence generation for governments, health-care providers and policymakers in public health organizations. We are generating evidence that can be used to at the very least limit the potential for this virus to become another pandemic,” says Moghadas.

Gut ‘primes’ pathogenic T cells responsible for neuroinflammation in multiple sclerosis, study finds

Multiple sclerosis (MS) is a debilitating neurological disorder caused by malfunctioning immune responses that target the brain and spinal cord of the central nervous system (CNS). What makes the body turn against itself? Failure of the immune system to distinguish “self” from “non-self” entities leads to excessive autoimmune responses against self-proteins like myelin, which forms a protective covering on the neurons.

Multiple factors influence the onset and progression of MS, including genetic susceptibility, environmental triggers, and, more recently, the gut microenvironment.

Patients with MS exhibit alterations in their gut microbiota, and the gut microbiota and microbial metabolites play a pivotal role in shaping chronic autoreactive immune responses. However, the cellular mechanisms of this gut–CNS axis, which relay gut-derived signals to the immune system to influence autoimmune inflammation in the CNS, remain poorly understood.

A study appearing in the journal Science Immunology uncovers a key mechanistic role for gut immune responses as initiators of neuroinflammation.

This study was led by Dr. Shohei Suzuki, Assistant Professor, Division of Gastroenterology and Hepatology, and Dr. Tomohisa Sujino, Associate Professor, School of Medicine, at Keio University, Japan.

“Increasing evidence shows that the gut microbiota influences neurological diseases such as Parkinson’s, Alzheimer’s, and MS. However, the mechanisms linking gut microbes, intestinal immunity, and brain inflammation remain unclear. We were keen to identify how gut immune responses contribute to neuroinflammatory diseases,” said Dr. Sujino, explaining their motivation for the study.

Building on their previous observation that mild intestinal (ileal) inflammation exists in experimental autoimmune encephalomyelitis (EAE), a mouse model of MS, the authors set out to test whether similar inflammation is present in patients with MS.

By performing single-cell RNA sequencing on intestinal biopsies, the team identified that inflammatory Th17 cells accumulate in the mouse EAE model as well as in the intestine of patients with MS, suggesting a conserved gut–CNS axis that may be active in human diseases.

In both EAE mice and patients with MS, intestinal epithelial cells (IECs) upregulated antigen presentation pathways. In particular, epithelial cells in the ileum had higher expression of major histocompatibility complex class II (MHC II) that presents antigens to CD4+ T cells, and selective deletion of MHC II in IECs reduced pathogenic Th17 cell generation and disease severity.

IECs do not typically present antigens to immune cells. So, the team conducted co-culture assays to test the antigen presentation function of IECs.

Their findings demonstrate that IECs can directly present antigens in an MHC II-dependent manner to prime CD4+ T cells in the gut. Notably, in these assays, IECs induced Th17 polarization of activated CD4+ T cells.

It became clear that the gut was a critical site for immune activation of pathogenic CD4+ T cells that polarized into pro-inflammatory Th17 cells.

To investigate whether the Th17 cells directly contribute to the pool of autoreactive cells in the CNS, they used transgenic mice that express the Kaede protein, which undergoes photoconversion from green to red fluorescence upon exposure to violet light.

This model allowed for precise tracking of pathogenic Th17 cells induced in the intestinal lamina propria that then migrate to the spinal cord and drive neuroinflammation.

Taken together, this study reveals a critical role for MHC II expressed by IECs in the expansion of pathogenic Th17 cells that subsequently migrate to the CNS during EAE, providing a mechanistic link between gut immune responses and autoimmune neuroinflammatory diseases.

This landmark study demonstrates that while systemic circulation allows T cell exchange across immune tissues, epithelial–immune interactions within the gut mucosal compartment can fundamentally shape effector T cell responses in the brain.

“While current therapies for MS often target B cells, our study highlights the gut as an important therapeutic site. Modulating the intestinal microbiota or antigen-presenting activity of IECs represents new approaches to treating autoimmune neurological diseases,” explains Dr. Suzuki, emphasizing the therapeutic implications of their findings.

Study finds M-CHAT autism screening misses 38% of high-risk toddlers

M-CHAT does not catch all children with autism in the neonatal high-risk group, shows a study from Karolinska Institutet published in JAMA Network Open. The researchers see a need to supplement the test with other assessment methods.

Children born very prematurely or with complications are screened at the age of two for early signs of autism using the M-CHAT questionnaire. In a new national study, researchers at Karolinska Institutet have investigated how well the test works in this high-risk group. The study includes 2,178 children born in Sweden between 2013 and 2019 and compares M-CHAT results with later clinical diagnoses of autism.

The researchers found that the test was highly accurate in ruling out autism, but that many children with autism were still missed. The sensitivity (the ability to identify children later diagnosed with autism) was 62%, while the specificity (the ability to identify children without autism) was 91%. In total, 12% of the children received a positive M-CHAT result and 6% were later diagnosed with autism.
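Taking the reported figures at face value (62% sensitivity, 91% specificity, 6% later diagnosed), the screen-positive rate can be checked and the predictive values, which the article does not state, can be derived with Bayes' rule:

```python
# Figures as reported by the study; predictive values derived from them.
sens, spec, prev = 0.62, 0.91, 0.06

# Fraction of all children expected to screen positive.
pos_rate = sens * prev + (1 - spec) * (1 - prev)

# Positive/negative predictive values: P(diagnosis | positive screen)
# and P(no diagnosis | negative screen).
ppv = sens * prev / pos_rate
npv = spec * (1 - prev) / ((1 - sens) * prev + spec * (1 - prev))

print(f"screen-positive rate: {pos_rate:.1%}")
print(f"PPV: {ppv:.0%}  NPV: {npv:.0%}")
```

The derived screen-positive rate of about 12% matches the figure reported in the study. The derived PPV of roughly 31% means most positive screens in this group would not lead to a diagnosis, while the NPV of about 97% reflects the test's strength at ruling autism out; both points support the researchers' call for supplementary assessment methods.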

“The results show that M-CHAT works relatively well to rule out autism, but that it does not catch all children who later receive a diagnosis. In this high-risk group, more tools are therefore needed to detect children who need further investigation early,” says Ulrika Ådén, professor at the Department of Women’s and Children’s Health.

Children born extremely prematurely had both the highest proportion of positive test results and the most autism diagnoses. The researchers also saw that girls had fewer positive test results than boys, and that linguistic factors could affect the outcome—the test had higher specificity in families that spoke a Scandinavian language.

“Overall, the study shows that other developmental difficulties, such as motor or sensory problems, can affect how M-CHAT is interpreted. This needs to be taken into account when health care works with early screening,” says Ådén.

Small RNAs offer new clues to schizophrenia and bipolar disorder

For decades, scientists studying brain disorders have focused almost exclusively on proteins and the genes encoding them. Now, research from Thomas Jefferson University’s Computational Medicine Center suggests that several classes of small regulatory molecules, fittingly known as small RNAs, may play a much larger role in schizophrenia and bipolar disorder, and in a healthy brain, than previously thought.

In a study recently published in Translational Psychiatry, a team led by Isidore Rigoutsos, Ph.D. took a comprehensive look at small RNAs in brain samples from people with schizophrenia, bipolar disorder and individuals without psychiatric illness. Their goal was to find out what kind of small RNAs are active in the brain, and whether their levels change in disease.

“Little attention had been paid to small RNAs in these disorders,” says Dr. Rigoutsos, “even though small RNAs help control numerous processes by modulating the abundance of genes.”

One well-known group, called microRNAs, had been studied but not extensively.

“If you only look at one class, you may be missing important regulatory events,” adds Dr. Rigoutsos.

To capture the broader picture, researchers used deep sequencing and specialized computational tools developed in the Rigoutsos lab. This allowed them to analyze multiple classes of small RNAs at once, and they found that microRNAs account for just over half of all small RNAs in the brain. The remainder comes from the other classes the Rigoutsos team studies. The team found that these other RNAs may regulate critical processes in schizophrenia and bipolar disorder, as well as in healthy brains.

A surprising pattern also emerged when the team separated participants by age. The small RNA profiles of young patients looked substantially different from those of healthy young people. Yet those differences disappeared when the researchers compared the profiles from the brains of older patients with those from older individuals without mental illness.

“It turns out that the differences in the small RNA populations happen early on in patients’ lives,” Dr. Rigoutsos says.

The findings highlight the growing importance of data-driven, collaborative science.

“To understand complex disease,” Dr. Rigoutsos continues, “we need to study all the molecules that are present and work across disciplines.”

Implantable islet cells could control diabetes without insulin injections

Most diabetes patients must carefully monitor their blood sugar levels and inject insulin multiple times per day to help keep their blood sugar from getting too high. As a possible alternative to those injections, MIT researchers are developing an implantable device that contains insulin-producing cells. The device encapsulates the cells, protecting them from immune rejection, and it also carries an onboard oxygen generator to keep the cells healthy.

This device, the researchers hope, could offer a way to achieve long-term control of type 1 diabetes. In a new study, they showed that these encapsulated pancreatic islet cells could survive in the body for at least 90 days. In mice that received the implants, the cells remained functional and produced enough insulin to control the animals’ blood sugar levels.

“Islet cell therapy can be a transformative treatment for patients. However, current methods also require immune suppression, which for some people can be really debilitating,” says Daniel Anderson, a professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science. “Our goal is to find a way to give patients the benefit of cell therapy without the need for immune suppression.”

Anderson is the senior author of the study, which is published in the journal Device. Former MIT research scientist Siddharth Krishnan, who is now an assistant professor of electrical engineering at Stanford University, and former MIT postdoc Matthew Bochenek are the lead authors of the paper. Robert Langer, the David H. Koch Institute Professor at MIT, is also a co-author.

Insulin on demand

Islet cell transplantation has already been used successfully to treat diabetes in patients. Those islet cells typically come from human cadavers, or more recently, can be generated from stem cells. In either case, patients must take immunosuppressive drugs to prevent their immune system from rejecting the transplanted cells.

Another way to prevent immune rejection is to encapsulate cells in a protective device. However, this raises new challenges, as the coating that surrounds the cells can prevent them from receiving enough oxygen.

In a 2023 study, Anderson and his colleagues reported an islet-encapsulation device that also carries an onboard oxygen generator. This generator consists of a proton-exchange membrane that can split water vapor (found abundantly in the body) into hydrogen and oxygen. The hydrogen diffuses harmlessly away, while oxygen goes into a storage chamber that feeds the islet cells through a thin, oxygen-permeable membrane.

Cells encapsulated within this device, they found, could produce insulin for up to a month after being implanted in mice.

“A month is a good timeframe in that it shows basic proof-of-concept. But from a translational standpoint, it’s important to show that you can go quite a bit longer than that,” Krishnan says.

In the new study, the researchers increased the lifespan of the devices by making them more waterproof and more resilient to cracking. They also improved the device electronics to deliver more power to the oxygen generator. The implant is powered wirelessly by an external antenna placed on the skin, which transfers energy to the device. By optimizing the circuitry, the researchers were able to increase the amount of power reaching the oxygen-generating system.

The additional power allowed the device to produce more oxygen, helping the encapsulated cells to survive and function more effectively. As a result, the cells were able to generate much more insulin over time.

Protein factories

In studies in rats and mice, the researchers showed that the new device could function for at least 90 days after being implanted under the skin. During this time, donor islet cells were able to produce enough insulin to keep the animals’ blood sugar levels within a healthy range.

The researchers saw similar results with islet cells derived from induced pluripotent stem cells, which could one day provide an indefinite supply that could be used for any patient who needs them. These islets didn’t fully reverse diabetes, but they did achieve some control of blood sugar levels.

“We’re hoping that in the future, if we can give the cells a little bit longer to fully mature, that they’ll secrete even more insulin to better regulate diabetes in the animals,” Bochenek says.

The researchers now plan to study whether they can get the devices to last for even longer in the body—up to two years, or longer.

“Long-term survival of the islets is an important goal,” Anderson says. “The cells, if they’re in the right environment, seem to be able to survive for a long time. We are excited by the duration we’ve already achieved, and we will be working to extend their function as long as possible.”

The researchers are also exploring the possibility of using this approach to deliver cells that could produce other useful proteins, such as antibodies, enzymes, or clotting factors.

“We think that these technologies could provide a long-term way to treat human disease by making drugs in the body instead of outside of the body,” Anderson says. “There are many protein therapies where patients must receive repeated, lengthy infusions. We think it may be possible to create a device that could continuously create protein therapeutics on demand and as needed by the patient.”

Frequent social media use could impact child development

Regular social media use across early adolescence is related to worse reading and vocabulary development over time, according to new research from the University of Georgia. The findings are published in the Journal of Research on Adolescence.

The study found that adolescents who used social media more often each day tend to struggle with recognizing and pronouncing words.

The new findings come just as Australia became the first country to ban children under 16 from using social media. As other countries consider similar measures, and social media platforms roll out age verification to restrict adolescents’ online activity, the study raises additional concerns about the impact of social media and screen use on childhood development, the researchers said.

“The brain is like a muscle. The more you use it, the more it changes according to how you’re using it,” said Cory Carvalho, lead author of the study who received his doctorate from the UGA College of Family and Consumer Sciences. “If you think of the Olympics, the figure skaters are really good at figure skating because they spend eight hours a day doing it. Their muscles are wired to be figure skating machines.

“If kids spend over eight hours a day using social media, that’s what their brains are going to adapt to and be wired for.”

Spending excessive time on social media linked to weaker reading skills, vocabulary

The study relied on longitudinal data from the ongoing Adolescent Brain Cognitive Development study, which follows more than 10,000 adolescents over six years starting around age 10.

The researchers found that frequent social media use was linked to struggles with reading and vocabulary across four years.

“There’s a time cost to social media use. If you’re spending time doing one thing, that means you’re not spending time doing another thing,” Carvalho said. “Other studies found that the more kids are using social media, the less they’re reading, so reading development lags behind. We also found this with their vocabulary.”

Weaker reading and vocabulary skills could impact a child’s school performance.

Children who used social media more often also struggled with attentional control across the same period. This could be because juggling multiple tasks and frequent notifications disrupt kids’ attention, but it’s also possible that adolescents who already struggle with focusing are more likely to use social media, the researchers said.

Kids who use social media more tend to process information faster

Not all the impacts of social media use were negative, though, the researchers said. Children who were on social media frequently processed information faster and had shorter reaction times. However, the researchers cautioned that these observed benefits may be limited to screen-based assessments of processing speed, like the one used in the study.

“It’s not necessarily that social media is having only these negative effects or only these positive effects,” said Niyantri Ravindran, co-author of the study and an assistant professor in the UGA College of Family and Consumer Sciences. “The negative effects on vocabulary and reading are more expected because social media is potentially depriving kids of opportunities to engage in some of those higher-level cognitive skills.”

Social media can also help children stay connected with others, especially if they’re in an environment where making friends is difficult, the researchers said.

Limiting screen time, waiting to get kids a smartphone could build better habits

To help combat those negative effects, the researchers suggest limiting screen time for adolescents, especially before bed. They also recommend waiting until kids are older to purchase a smartphone.

If parents do need to stay in touch with their kids, a “dumb phone” that can’t access social media could also be an option, the researchers said.

“Social media is new, so everybody’s trying to figure out what we do with this new paradigm,” Carvalho said. “Kids like it. Adults like it. And everybody uses it.

“What you’re going to see is that a lot of different states, countries and organizations are going to try different things. Hopefully, we settle on some norms that work for kids and not for profits.”

Adversarial AI framework reveals mechanisms behind impaired consciousness and a potential therapy

Consciousness and the ways in which it can become impaired after certain brain injuries are not well understood, making disorders of consciousness (DOC), such as coma, vegetative states and minimally conscious states, difficult to treat. But a new study, published in Nature Neuroscience, indicates that AI might be able to help researchers gain some traction with this problem. The research team involved in the new study has developed an adversarial AI framework to help them determine what exactly is going on in states of reduced consciousness and how to approach a solution.

Two AI models play a consciousness game

To better understand the mechanisms behind impaired consciousness, the researchers developed two types of AI models and had them play a kind of game: one model generated EEGs simulated to look like those of real unconscious and conscious brains, while the other determined the level of consciousness those signals reflected. The AI agents guessing consciousness levels, called deep convolutional neural networks (DCNNs), were first trained on 680,000 ten-second recordings of brain activity from conscious and unconscious humans, monkeys, bats and rats to detect which neural signals relate to differing levels of consciousness. The AI generating the EEG data was a biologically plausible simulation of the human brain.

“To decode consciousness from these signals, we trained three separate DCNNs, each specialized for a different brain region, to output a continuous score from 0 (unconscious) to 1 (fully conscious): a cortical consciousness detector (ctx-DCNN), a thalamic consciousness detector (th-DCNN) and a pallidal consciousness detector (pal-DCNN). The ctx-DCNN was trained on continuous consciousness levels derived from clinical scales (GCS and CRS-R), enabling it to recognize graded states of consciousness,” the study authors explain.

Without explicit programming, the AI model was able to reproduce known responses to brain stimulation that occur in DOC. The team then analyzed the parameters that the simulation model tweaked to derive testable predictions about the underlying mechanisms of unconsciousness.
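The parameter-tweaking loop can be caricatured in a few lines of Python. This is a toy sketch, not the study's method: the detector, the simulator, and the `inhib_coupling` parameter are all invented stand-ins for the trained DCNNs and the biophysical brain model.

```python
def detector_score(signal_mean):
    # Stand-in for a DCNN consciousness detector: maps simulated activity
    # to a score between 0 (unconscious) and 1 (fully conscious).
    return max(0.0, min(1.0, signal_mean))


def simulate(inhib_coupling):
    # Stand-in simulator: stronger inhibitory-to-inhibitory coupling
    # lowers mean cortical activity (an invented linear relationship).
    return 0.9 - 0.8 * inhib_coupling


def fit_unconscious(target=0.1, steps=200, lr=0.05):
    # Adversarial-style loop: nudge the coupling parameter until the
    # detector scores the simulated signal as unconscious (near target).
    # The parameter's final value is then read off as a candidate
    # mechanistic prediction.
    c = 0.0
    for _ in range(steps):
        score = detector_score(simulate(c))
        c += lr * (score - target)  # gradient-free nudge toward the target
        c = max(0.0, min(1.0, c))
    return c, detector_score(simulate(c))
```

In this caricature, the loop converges by driving inhibitory coupling up, mirroring the study's first validated prediction that increased inhibitory-to-inhibitory coupling accompanies unconsciousness.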

Revealing new mechanisms behind unconsciousness

The researchers say the model predicted two previously unknown mechanisms of unconsciousness that they were then able to validate. The first is increased inhibitory-to-inhibitory neuron coupling in the cortex, in which more neurons restrain the firing of other neurons, reducing overall activity. The researchers validated this prediction using RNA sequencing data from the brain tissue of comatose patients and data from rats with stroke-induced brain damage: in both, those with impaired consciousness showed an upregulation of genes that drive cortical inhibitory synapse formation.

The AI model also predicted that those with impaired consciousness have a selective disruption of the basal ganglia indirect pathway—a neural circuit that increases inhibition of the thalamus, thereby suppressing unwanted movements and motor actions. To validate the prediction, the researchers analyzed diffusion tensor imaging (DTI) scans from 51 patients with different disorders of consciousness. They say the analysis provided supporting evidence for the plausibility of selective basal ganglia pathway disruption in pathological unconsciousness, although limitations of the study, such as DTI's lack of cell-type specificity, warrant further validation.

A new target for ‘waking up’ the brain

Although deep-brain stimulation (DBS) has shown promise as a DOC therapy in previous studies, the approach has suffered from a lack of clear mechanistic targets. The AI model in this study identified high-frequency stimulation of the subthalamic nucleus (STN) as a particularly promising target, and the team says there is good reason to take it seriously. They cite a previous study of people with a DBS device implanted for cervical dystonia, a type of neck spasm, in which some patients received stimulation of the subthalamic nucleus. However, the patients in that study were conscious, so more work is needed to test the idea.

“Critically, our framework offers a platform for in silico testing of DBS strategies in DOC. Across modeled cases, high-frequency (50–130 Hz) stimulation of the STN consistently increased the AI-predicted level of consciousness. In patients with cervical dystonia, who were fully conscious during both the DBS on and DBS off conditions, predicted consciousness levels were already nearly maximal but showed a consistent upward shift with stimulation. This indicates that STN DBS may push neural dynamics toward patterns classified as more conscious-like by the DCNN, rather than restoring lost consciousness in this context,” the study authors write.
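As a caricature of this kind of in silico screen, one can sweep candidate stimulation frequencies and keep the one the detector scores highest. Everything below is a hypothetical stand-in: the response curve is invented and has no relation to the study's model, beyond happening to peak inside the 50–130 Hz band the authors report.

```python
def predicted_consciousness(freq_hz):
    # Invented response curve: a hypothetical stand-in for the trained
    # consciousness detector, peaking at 90 Hz and falling off linearly.
    peak_hz, width_hz = 90.0, 60.0
    return max(0.0, 1.0 - abs(freq_hz - peak_hz) / width_hz)


def best_frequency(freqs):
    # Score every candidate stimulation frequency and return the one
    # with the highest predicted consciousness score, plus that score.
    scores = {f: predicted_consciousness(f) for f in freqs}
    best = max(scores, key=scores.get)
    return best, scores[best]
```

For example, `best_frequency(range(10, 200, 10))` would single out 90 Hz under these made-up assumptions; the study's actual screen ran the full biophysical simulation under each stimulation protocol.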

The team hopes to refine their work in future studies and potentially test out whether high-frequency stimulation of the subthalamic nucleus can actually “wake up” patients with impaired consciousness. As for the adversarial AI framework, they say similar methods may be adapted for other complex brain disorders.

A Research Briefing on the work was also published in Nature Neuroscience.

Short-lived fish offer new insights into the aging immune system

Our immune system protects the body from infections and harmful changes throughout our lives. However, it loses its effectiveness with age, resulting in an increased risk of disease. But what happens when the immune system ages—and can this process possibly be stopped?

In a study now published in Nature Aging as a cover article, researchers at the Leibniz Institute on Aging—Fritz Lipmann Institute (FLI) have taken an important step towards answering these questions. Exploiting the extremely short lifespan of the turquoise killifish (Nothobranchius furzeri), they identified key characteristics of immune aging within a few weeks, making the model particularly well suited for rapid mechanistic discoveries and for testing potential interventions.

The study combines various analytical methods, such as cytometry, single-cell transcriptomics, proteomics, AI-supported image classification, in situ imaging, histology, and functional immunoassays. With the newly established open multi-omics resource KIAMO, it thus provides a comprehensive overview of immune aging in a short-lived vertebrate. The work began at the Max Planck Institute for Biology of Aging (MPI-AGE) in Cologne and was later continued at the FLI in Jena.

The researchers show that key features of immune aging are present in killifish and are strikingly similar to those seen in mammals and humans. The study provides unique insights into the mechanisms of so-called “immune aging.” Since killifish only live for a few months, aging processes can be observed in fast motion within a few weeks—a major advantage for experimental research.

“The killifish system once again surprises us as it reveals that key aspects of immune aging—both at molecular and cellular level—are deeply evolutionarily conserved,” says Prof. Dario Riccardo Valenzano, pioneer of killifish research and Scientific Director at the FLI. “Our findings prove that killifish could be an optimally suited model to test interventions that, by targeting immune aging, improve systemic aging.”

Inflammatory processes increase with age

One of the central findings of the study is the presence of a pronounced systemic inflammatory signature in older fish, often referred to as “inflammaging.” Blood analyses revealed increased levels of acute-phase proteins as well as markers of metabolic imbalance. Similar inflammatory signatures are well known in aging mammals and humans and are associated with a wide range of age-related diseases.

Age-related changes were particularly evident in the kidney marrow, the main hematopoietic organ in fish and the functional counterpart of mammalian bone marrow. With increasing age, the researchers observed structural remodeling, fibrosis, tissue alterations, and shifts in immune-cell populations.

At the same time, the data indicates an expansion of progenitor and stem-like immune cells. However, these cells accumulate DNA double-strand damage and show reduced markers of active DNA repair. Importantly, this accumulation of DNA damage cannot be explained by replication alone, suggesting a state consistent with cellular senescence and impaired differentiation capacity.

“I have always been fascinated by the idea that biological processes, including aging, follow principles that can be understood and eventually translated into interventions. Rather than accepting decline as inevitable, the killifish model gives us a way to dissect aging mechanisms in a compressed time frame, while recapitulating key aspects of immune aging seen in mammals,” explains Gabriele Morabito, Ph.D. student and first author of the study.

Impaired immune response in old age

Functional experiments confirmed these observations. Immune cells isolated from older killifish responded significantly less strongly to bacterial stimulation than cells from young animals.

In cell-culture experiments, pre-treatment with a senolytic partially restored youthful immune responses in vitro, suggesting that senescent cells actively contribute to the age-related impairment of the immune response. The killifish therefore represents a promising model for testing interventions that target immune aging.

New open resource for research community

Alongside the study, the researchers established a publicly accessible multi-omics platform called KIAMO (Killifish Immune Aging Multi-Omics). The platform provides the international research community with extensive molecular datasets, including single-cell gene-expression profiles, proteomics data, and imaging resources.

Although the study provides detailed insights into immune aging in the hematopoietic system, important questions remain. It is still unclear how strongly these changes influence aging processes in other organs.

However, the killifish offers a unique opportunity to experimentally investigate these relationships, according to Prof. Valenzano. With its short lifespan, conserved immune biology, and the newly established KIAMO resource, the turquoise killifish provides a powerful experimental platform to study immune aging in vertebrates and to accelerate the development of strategies aimed at improving health during aging.

It may be too soon to scrap Daylight Saving Time, suggests research

Ahead of the start of Daylight Saving Time (DST) on 26 March, a comprehensive international review by researchers at the University of Kent has highlighted the complex arguments for and against scrapping the twice-yearly clock change, and the need for more evidence before a decision can be made. Calls to scrap DST have intensified in recent years, with campaigners often emphasizing its negative consequences for public health and well-being in the UK. However, a review of 157 studies from 36 countries, led by the Medway School of Pharmacy in partnership with researchers at the University of Cologne, suggests that this simple messaging can be misleading.

The research team conducted a systematic review following international reporting guidelines and searched five major scientific databases—PubMed, Web of Science, Scopus, PsycINFO, and EconLit—for studies published up to June 2025. Human studies assessing either short-term effects of clock changes or comparisons between daylight saving time and standard time were included. The work is published in the European Journal of Epidemiology.

The review revealed that when clocks “spring forward,” the shift is associated with an increased number of heart attacks and fatal traffic accidents, but also with less crime involving physical harm. Conversely, when clocks “fall back” by an hour in the autumn, all-cause mortality and workplace accidents appear to fall, while crimes involving physical harm increase.

When examining longer periods rather than transition periods alone, the researchers found that living under daylight saving time during summer months was associated with lower all-cause mortality and fewer traffic accidents compared with standard time.

In contrast, standard time during winter may be associated with shorter sleep duration, although evidence for broader sleep and circadian rhythm effects remains limited. The review found no clear or consistent evidence linking daylight saving time to psychiatric outcomes.

Despite the number of studies reviewed, the researchers found the evidence within them was limited, and they emphasize the need for more robust research before firm conclusions can be drawn about the costs and benefits of Daylight Saving Time.

As Dr. Aiste Steponenaite, a sleep and circadian neuroscientist at the University of Kent who led the study, explains: “Public debate often frames daylight saving time as either clearly harmful or clearly beneficial, but our findings suggest the reality is more nuanced. This work shows why simple headlines do not capture the full picture. Policymakers deserve evidence that reflects both risks and benefits—not assumptions.”