Blood test identifies HPV-associated head and neck cancers up to 10 years before symptoms

Human papillomavirus (HPV) causes around 70% of head and neck cancers in the United States, making it the most common cancer caused by the virus, with rates increasing each year. Unlike cervical cancers caused by HPV, there is no screening test for HPV-associated head and neck cancers. This means that patients are usually diagnosed after a tumor has grown to billions of cells in size, causing symptoms and spreading to lymph nodes. Screening methods that can detect these cancers much earlier could mean earlier treatment interventions for patients.

In a study published in the Journal of the National Cancer Institute, Mass General Brigham researchers show that a novel liquid biopsy tool they developed, called HPV-DeepSeek, can identify HPV-associated head and neck cancer up to 10 years before symptoms appear. By catching cancers earlier with this novel test, patients may experience higher treatment success and require a less intense regimen, according to the authors.

“Our study shows for the first time that we can accurately detect HPV-associated cancers in asymptomatic individuals many years before they are ever diagnosed with cancer,” said lead study author Daniel L. Faden, MD, FACS, a head and neck surgical oncologist and principal investigator in the Mike Toth Head and Neck Cancer Research Center at Mass Eye and Ear, a member of the Mass General Brigham health care system.

“By the time patients enter our clinics with symptoms from the cancer, they require treatments that cause significant, lifelong side effects. We hope tools like HPV-DeepSeek will allow us to catch these cancers at their very earliest stages, which ultimately can improve patient outcomes and quality of life.”

HPV-DeepSeek uses whole-genome sequencing to detect microscopic fragments of HPV DNA that have broken off from a tumor and entered the bloodstream. Previous research from this team showed the test could achieve 99% specificity and 99% sensitivity for diagnosing cancer at the time of patients’ first presentation to a clinic, outperforming current testing methods.

To determine whether HPV-DeepSeek could detect HPV-associated head and neck cancer long before diagnosis, researchers tested 56 samples from the Mass General Brigham Biobank: 28 from individuals who went on to develop HPV-associated head and neck cancer years later, and 28 from healthy controls.

HPV-DeepSeek detected HPV tumor DNA in 22 out of 28 blood samples from patients who later developed the cancer, whereas all 28 control samples tested negative, indicating that the test is highly specific. The test was better able to detect HPV DNA in blood samples that were collected closer to the time of the patients’ diagnosis, and the earliest positive result was for a blood sample collected 7.8 years prior to diagnosis.
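As a quick back-of-the-envelope check (our arithmetic, not a figure from the paper), those counts translate into the following sensitivity and specificity for the pre-diagnostic setting:

```python
# Illustrative arithmetic only: sensitivity and specificity implied by the
# reported counts (22/28 pre-diagnostic cases detected, 0/28 controls positive).
true_positives = 22   # pre-diagnostic samples flagged by HPV-DeepSeek
false_negatives = 6   # 28 future cases minus the 22 detected
true_negatives = 28   # all control samples tested negative
false_positives = 0

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.1%}")  # 78.6%
print(f"Specificity: {specificity:.1%}")  # 100.0%
```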

Using machine learning, the researchers were able to improve the test’s power so that it accurately identified 27 out of 28 cancer cases, including samples collected up to 10 years prior to diagnosis.
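The article does not describe the model’s inputs, but the general workflow, pooling several per-sample assay features into a single cross-validated classifier, can be sketched as follows. Every feature name here is invented for illustration:

```python
# Hypothetical sketch only: the model inputs are not described in this summary,
# so every feature below is invented. Random placeholder data will give
# chance-level AUC; the point is the workflow, not the numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 56  # 28 future cases + 28 controls, matching the study's sample size
X = np.column_stack([
    rng.gamma(2.0, 1.0, n),      # stand-in for HPV genome coverage
    rng.normal(167.0, 10.0, n),  # stand-in for cfDNA fragment length
    rng.beta(2.0, 5.0, n),       # stand-in for a methylation score
])
y = np.array([1] * 28 + [0] * 28)  # 1 = later developed cancer

clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC on placeholder data: {auc:.2f}")
```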

The authors are now validating these findings in a second blinded study using hundreds of samples collected as part of the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial (PLCO) at the National Cancer Institute.

Breast cell changes in motherhood provide clues to breastfeeding difficulties

In a study in mice, researchers have identified genes associated with the dramatic transformation of the mammary gland in pregnancy, breastfeeding, and after breastfeeding as it returns to its resting state.

Their results form the most detailed atlas of genetic expression ever produced for the adult developmental cycle of the mammary gland. They are published in the journal Nucleic Acids Research.

The mammary gland is made up of different cell types, each with a different function—such as fat cells that provide structural support, and basal cells that are crucial for milk ejection.

The team analyzed the cellular composition of the mammary gland at ten different time-points from before the first pregnancy, during pregnancy, during breastfeeding, and during a process called involution when the breast tissue is remodeled to its resting state. The mix of cell types changes dramatically through this cycle.

By measuring gene expression in the mammary gland over the same time-points, the researchers were able to link specific genes to their functions at different stages of the developmental cycle.

“Our atlas is the most detailed to date, allowing us to see which genes are expressed in which cell types at each stage of the adult mammary gland cycle,” said Dr. Geula Hanin, a researcher in the University of Cambridge’s Department of Genetics, first author of the report.

The team found that genes associated with breastfeeding disorders such as insufficient milk supply are active not only in the breast cells that produce milk, but also in other cells such as basal cells—which squeeze out the milk as the infant is suckling.

This suggests that in some instances, a mechanical problem—rather than a milk production problem—could be the cause and provides a new cell target for investigation.

The study also found that genes associated with postpartum breast cancer become active immediately after weaning in various cell types—including in fat cells, which have previously been overlooked as contributors to breast cancer linked to childbirth. This offers a future potential target for early detection or prevention strategies.

Hanin said, “We’ve found that genes associated with problems in milk production, often experienced by breastfeeding mothers, are acting in breast cells that weren’t previously considered relevant for milk production. We’ve found genes associated with postpartum breast cancer acting in cells that have been similarly overlooked.

“This work provides many potential new ways of transforming maternal and infant health, by using genetic information to both predict problems with breastfeeding and breast cancer, and to tackle them further down the line.”

Breastfeeding affects lifelong health; breastfed babies, for example, are less likely to become obese or diabetic. Yet one in twenty women has breastfeeding difficulties, and despite its importance, this remains a greatly understudied area of women’s health.

Postpartum breast cancer occurs within five to ten years of giving birth and is linked to hormonal fluctuations, natural tissue remodeling, and the changing environment of the mammary gland during involution that makes it more susceptible to malignancy.

The researchers also focused on “imprinted genes”—that is, genes that are switched on or off depending on whether they are inherited from the mother or the father. Imprinted genes in the placenta are known to regulate growth and development of the baby in the womb.

The team identified 25 imprinted genes that are active in the adult mammary gland at precise times during the development cycle. These appear to orchestrate a tightly controlled system for managing milk production and breast tissue changes during motherhood.

Some functions of the genes themselves have been identified in previous studies. This new work provides a detailed understanding of when, and where, the genes become active to cause changes in mammary gland function during its adult development cycle.

“Breastfeeding is a fundamental process that’s common to all mammals; we wouldn’t have survived without it. I hope this work will lead to new ways to support mothers who have issues with breastfeeding, so they have a better chance of succeeding,” said Hanin.

Hanin co-leads the Cambridge Lactation Network and is a member of Cambridge Reproduction.

Squishy ‘smart cartilage’ could target arthritis pain as soon as flareups begin

Researchers have developed a material that can sense tiny changes within the body, such as during an arthritis flareup, and release drugs exactly where and when they are needed.

The squishy material can be loaded with anti-inflammatory drugs that are released in response to small changes in pH in the body. During an arthritis flareup, a joint becomes inflamed and slightly more acidic than the surrounding tissue.

The material, developed by researchers at the University of Cambridge, has been designed to respond to this natural change in pH. As acidity increases, the material becomes softer and more jelly-like, triggering the release of drug molecules that can be encapsulated within its structure. Since the material is designed to respond only within a narrow pH range, the team says that drugs could be released precisely where and when they are needed, potentially reducing side effects.
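A toy model of such a narrow-range pH switch is sketched below (our illustration; the midpoint and steepness are assumed values, not measurements from the study):

```python
# Toy model (our assumptions, not the paper's): drug release rises steeply as
# pH drops below healthy tissue pH (~7.4) toward inflamed-joint pH (~6.8).
import math

def release_fraction(ph, midpoint=7.1, steepness=12.0):
    """Logistic switch: near 0 at healthy pH, near 1 once acidity crosses the midpoint."""
    return 1.0 / (1.0 + math.exp(steepness * (ph - midpoint)))

for ph in (7.4, 7.2, 7.0, 6.8):
    print(f"pH {ph}: {release_fraction(ph):.0%} of cargo released")
```

The steep logistic captures the key design idea: almost no release at healthy pH, but a sharp switch to release within the narrow acidity window of an inflamed joint.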

If used as artificial cartilage in arthritic joints, this approach could allow for the continuous treatment of arthritis, improving the efficacy of drugs to relieve pain and fight inflammation. Arthritis affects more than 10 million people in the UK, costing the NHS an estimated £10.2 billion annually. Worldwide, it is estimated to affect over 600 million people.

While extensive clinical trials are needed before the material can be used in patients, the researchers say their approach could improve outcomes for people with arthritis, and for those with other conditions including cancer. Their results are reported in the Journal of the American Chemical Society.

The material developed by the Cambridge team uses specially engineered and reversible cross-links within a polymer network. The sensitivity of these links to changes in acidity levels gives the material highly responsive mechanical properties.

The material was developed in Professor Oren Scherman’s research group in Cambridge’s Yusuf Hamied Department of Chemistry. The group specializes in designing and building these unique materials for a range of potential applications.

“For a while now, we’ve been interested in using these materials in joints, since their properties can mimic those of cartilage,” said Scherman, who is Professor of Supramolecular and Polymer Chemistry and Director of the Melville Laboratory for Polymer Synthesis. “But to combine that with highly targeted drug delivery is a really exciting prospect.”

“These materials can ‘sense’ when something is wrong in the body and respond by delivering treatment right where it’s needed,” said first author Dr. Stephen O’Neill. “This could reduce the need for repeated doses of drugs, while improving patient quality of life.”

Unlike many drug delivery systems that require external triggers such as heat or light, this one is powered by the body’s own chemistry. The researchers say this could pave the way for longer-lasting, targeted arthritis treatments that automatically respond to flareups, boosting effectiveness while reducing harmful side effects.

In laboratory tests, researchers loaded the material with a fluorescent dye to mimic how a real drug might behave. They found that at acidity levels typical of an arthritic joint, the material released substantially more of its cargo than at normal, healthy pH levels.

“By tuning the chemistry of these gels, we can make them highly sensitive to the subtle shifts in acidity that occur in inflamed tissue,” said co-author Dr. Jade McCune. “That means drugs are released when and where they are needed most.”

The researchers say the approach could be tailored to a range of medical conditions, by fine-tuning the chemistry of the material.

“It’s a highly flexible approach, so we could, in theory, incorporate both fast-acting and slow-acting drugs, and have a single treatment that lasts for days, weeks or even months,” said O’Neill.

The team’s next steps will involve testing the materials in living systems to evaluate their performance and safety in a physiological environment. The team says that if it is successful, their approach could open the door to a new generation of responsive biomaterials capable of treating chronic diseases with greater precision.

Estrogen receptor loss in kidney cells may trigger preeclampsia

University of Florence–led investigators report that estrogen-regulated renal progenitor cells shape pregnancy adaptation in mice, with failure of estrogen receptor alpha signaling precipitating preeclampsia, maternal kidney injury, and offspring vulnerability to hypertension and chronic kidney disease.

Preeclampsia complicates ~5% of pregnancies and is associated with later-life hypertension and chronic kidney disease (CKD) for mothers and their children.

CKD affects over 10% of the global population and raises cardiovascular health risks. Male sex is associated with faster CKD progression, while loss of female sex hormones in postmenopausal or ovariectomized women is linked to higher rates of CKD and cardiovascular events.

Researchers wanted to determine if sex differences in CKD progression might be related to kidney structural adaptations to the workload imposed by pregnancy.

In the study, “Estrogen-regulated renal progenitors determine pregnancy adaptation and preeclampsia,” published in Science, researchers used lineage tracing and single-cell RNA-sequencing to test whether estrogen signaling in renal progenitors supports podocyte generation and modulates susceptibility to glomerular injury and preeclampsia.

Animal experiments used female mice with selective estrogen receptor alpha (ERα) deletion in renal progenitors, and male mice as wild-type comparators. Human work included primary human renal progenitor cell cultures, analyses of human kidney biopsies, and urine-derived renal progenitor cultures from pregnant women but not from healthy controls.

Mouse studies showed a larger pool of renal progenitor cells after puberty, with cells moving into the kidney’s filtering units and maturing into podocytes.

In females, about one in 10 of these filters showed migrating progenitors, compared with only about one in 100 in males, and by day 120 females carried more podocytes. Removing ERα in mouse renal progenitors erased this female advantage, reduced podocyte coverage, and brought urine protein and blood pressure in line with male levels.

Human evidence mirrored those patterns. Primary human renal progenitor cells exposed to 17β-estradiol and progesterone increased in number and matured more readily, with estradiol acting mainly through ERα at concentrations typical of ovulation and pregnancy. Kidney tissue from young women contained more progenitors and more podocytes than tissue from postmenopausal women or from men.

Disease models reinforced the female advantage. In chemically-induced kidney injury in mice, females showed less podocyte damage, lower protein loss in urine, and better kidney function than males. Estradiol improved outcomes in males. Loss of ERα in female progenitors reduced the appearance of new progenitor-derived podocytes.

Pregnancy experiments described expansion of kidney progenitors and new podocyte generation in wild-type dams, with reductions after ERα loss. ERα-deficient dams developed hypertension, progressive protein leakage, smaller rises in kidney filtration rate, and higher blood urea nitrogen, consistent with preeclampsia and impaired kidney function.

A comparison model that targeted blood vessel tone produced pregnancy complications of similar severity that resolved after delivery, while ERα-deficient dams remained hypertensive with chronic kidney disease.

Placentas from ERα-deficient pregnancies were smaller and more fibrotic and produced more soluble FMS-like tyrosine kinase-1. Maternal L-tryptophan fell in ERα-deficient pregnancies.

Litters were smaller, and pups were born with reduced body size and kidney weight. Offspring later developed high blood pressure and higher urinary protein by day 120, carried fewer kidney filters at birth, and showed worse injury and function after chemically induced kidney damage, with the most severe disease in males.

Authors conclude that estrogen-driven kidney progenitors expand podocyte reserves during reproductive life and pregnancy in females, lowering risks of albuminuria and hypertension.

Failure of this mechanism contributes to preeclampsia, postpartum maternal chronic kidney disease, and fewer kidney filters (nephrons) in offspring with greater lifelong vulnerability.

Higher blood pressure in childhood linked to earlier death from heart disease in adulthood

Blood pressure matters at all ages. Children with higher blood pressure at age 7 may be at an increased risk of dying of cardiovascular disease by their mid-50s, according to preliminary research presented at the American Heart Association’s Hypertension Scientific Sessions 2025, held in Baltimore, September 4–7, 2025.

The study is simultaneously published in JAMA.

“We were surprised to find that high blood pressure in childhood was linked to serious health conditions many years later. Specifically, having hypertension or elevated blood pressure as a child may increase the risk of death by 40% to 50% over the next five decades of an individual’s life,” said Alexa Freedman, Ph.D., lead author of the study and an assistant professor in the department of preventive medicine at Northwestern University’s Feinberg School of Medicine in Chicago.

“Our results highlight the importance of screening for blood pressure in childhood and focusing on strategies to promote optimal cardiovascular health beginning in childhood.”

Previous research has shown that childhood blood pressure is associated with an increased risk of cardiovascular disease in adulthood, and a 2022 study found that elevated blood pressure in older children (average age of 12 years) increased the risk of cardiovascular death by middle age (average age of 46 years).

The current study is the first to investigate the impact of both systolic (top number) and diastolic (bottom number) blood pressure in childhood on long-term cardiovascular death risk in a diverse group of children. Clinical practice guidelines from the American Academy of Pediatrics recommend checking blood pressure at annual well-child pediatric appointments starting at age 3 years.

“The results of this study support monitoring blood pressure as an important metric of cardiovascular health in childhood,” said Bonita Falkner, M.D., FAHA, an American Heart Association volunteer expert.

“Moreover, the results of this study and other older child cohort studies with potential follow-up in adulthood will contribute to a more accurate definition of abnormal blood pressure and hypertension in childhood.”

Falkner, who was not involved in this study, is emeritus professor of pediatrics and medicine at Thomas Jefferson University.

The researchers used the National Death Index to follow up on the survival or cause of death as of 2016 for approximately 38,000 children who had their blood pressures taken at age 7 years as part of the Collaborative Perinatal Project (CPP), the largest U.S. study to document the influence of pregnancy and postnatal factors on the health of children.

Blood pressure measurements taken in the children at age 7 years were converted to age-, sex-, and height-specific percentiles according to the American Academy of Pediatrics clinical practice guidelines.
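The percentile conversion itself can be sketched as a z-score lookup against normative values (the mean and SD below are placeholders for the example; the AAP guideline uses regression-based normative tables by age, sex, and height):

```python
# Illustrative percentile conversion (placeholder norms, not published values).
from scipy.stats import norm

def bp_percentile(measured_bp, norm_mean, norm_sd):
    """Convert a blood pressure reading to a percentile via a z-score."""
    z = (measured_bp - norm_mean) / norm_sd
    return 100.0 * norm.cdf(z)

# A 7-year-old's systolic reading of 112 against hypothetical norms (100, 8):
pct = bp_percentile(112, norm_mean=100, norm_sd=8)
print(f"{pct:.0f}th percentile")  # ~93rd, i.e. within the "elevated" 90th-94th band
```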

The analysis accounted for demographic factors as well as for childhood body mass index, to ensure that the findings were related to childhood blood pressure itself rather than a reflection of children who were overweight or had obesity.

After follow-up through an average age of 54 years, the analysis found:

  • Children who had higher blood pressure (age-, sex-, and height-specific systolic or diastolic percentile) at age 7 were more likely to die early from cardiovascular disease as adults by their mid-50s. The risk was highest for children whose measurements were in the top 10% for their age, sex and height.
  • By 2016, a total of 2,837 participants had died, with 504 of those deaths attributed to cardiovascular disease.
  • Both elevated blood pressure (90th–94th percentile) and hypertension (≥95th percentile) were linked with about a 40% to 50% higher risk of early cardiovascular death in adulthood (see the survival-analysis sketch after this list).
  • Moderate elevations in blood pressure were also important, even among children whose readings were still within the normal range: children with blood pressure moderately above average had a 13% (systolic) and 18% (diastolic) higher risk of premature cardiovascular death.
  • In the 150 sibling clusters in the CPP, the sibling with higher blood pressure at age 7 showed similar increases in cardiovascular death risk compared with the sibling with lower readings (15% for systolic and 19% for diastolic), indicating that shared family and early childhood environment could not fully explain the impact of blood pressure.
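A minimal sketch of the kind of survival analysis that produces hazard estimates like these, using synthetic data and the lifelines library (the study’s actual models also adjusted for childhood BMI, study site, and maternal factors):

```python
# Synthetic-data sketch of a Cox proportional hazards model recovering a
# ~1.45x hazard ratio, the magnitude of a "40% to 50% higher risk".
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 5000
hypertensive = rng.integers(0, 2, n)             # 1 = BP >= 95th percentile
baseline_hazard = 0.002                          # events per person-year (assumed)
hazard = baseline_hazard * np.where(hypertensive == 1, 1.45, 1.0)
time_to_event = rng.exponential(1 / hazard)      # simulated event times
follow_up = 47.0                                 # years from age 7 to mid-50s
observed = (time_to_event <= follow_up).astype(int)
duration = np.minimum(time_to_event, follow_up)

df = pd.DataFrame({"duration": duration, "event": observed,
                   "hypertensive": hypertensive})
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
print(cph.hazard_ratios_)  # should recover a hazard ratio near 1.45
```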

“Even in childhood, blood pressure numbers are important because high blood pressure in children can have serious consequences throughout their lives. It is crucial to be aware of your child’s blood pressure readings,” Freedman said.

The study has several limitations, primarily that the analysis included only a single blood pressure measurement, taken at age 7, which may not capture variability or long-term patterns in childhood blood pressure. In addition, participants in the CPP were primarily Black or white, so the study’s findings may not be generalizable to children of other racial or ethnic groups.

Also, children today are likely to have different lifestyles and environmental exposures than the children who participated in the CPP in the 1960s and 1970s.

Study details, background and design:

38,252 children born to mothers enrolled at one of 12 sites across the U.S. as part of the Collaborative Perinatal Project between 1959 and 1965. 50.7% of participants were male; 49.4% of mothers self-identified as Black and 46.4% as white; 4.2% of participants were Hispanic, Asian or members of other groups.

This analysis reviewed blood pressure taken at age 7, and these measures were converted to age-, sex-, and height-specific percentiles according to the American Academy of Pediatrics Clinical Practice Guideline for Screening and Management of High Blood Pressure in Children and Adolescents.

Survival through 2016 and the cause of death for the offspring of CPP participants in adulthood were retrieved through the National Death Index.

Survival analysis was used to estimate the association between childhood blood pressure and cardiovascular death, adjusted for childhood body mass index, study site, and mother’s race, education and marital status.

In addition, the sample included 150 groups of siblings, and the researchers examined whether the sibling with higher blood pressure was more likely to die of cardiovascular disease than the sibling with lower blood pressure. This sibling analysis allowed researchers to ask how much shared family and early childhood factors might account for the mortality risk related to blood pressure.

Human brains explore more to avoid losses than to seek gains

Researchers at the Weizmann Institute of Science traced a neural mechanism that explains why humans explore more aggressively when avoiding losses than when pursuing gains. Their work reveals how neuronal firing and noise in the amygdala shape exploratory decision-making.

Human survival has its origins in a delicate balance of exploration versus exploitation. There is safety in exploiting what is known: the local hunting grounds, the favorite foraging location, the go-to deli with the familiar menu. But exploitation carries its own risk: over-reliance on the familiar leaves one exposed if local resources are depleted or become unstable.

Exploring the world in the hope of discovering better options has its own set of risks and rewards. There is the chance of finding plentiful hunting grounds, alternative foraging resources, or a new deli that offers a fresh take on old favorites. And there is the risk that new hunting grounds will be scarce, the newly foraged berries poisonous, or that mealtime will be ruined by a deli that disappoints.

Exploration-exploitation (EE) dilemma research has concentrated on gain-seeking contexts, identifying exploration-related activity in surface brain areas such as the cortex and in deeper regions such as the amygdala. Exploration tends to rise when people feel unsure about which option will pay off.

Loss-avoidance strategies differ from gain-seeking strategies, with links to negative outcomes such as PTSD, anxiety and mood disorders.

In the study, “Rate and noise in human amygdala drive increased exploration in aversive learning,” published in Nature, researchers recorded single-unit activity to compare exploration during gain versus loss learning.

Seventeen epilepsy patients already implanted with clinical depth electrodes took part. Recordings captured 382 neurons across 22 sessions, mainly in the amygdala and nearby temporal cortex.

Participants played a two-choice game with intermixed trial types. A tone signaled whether a trial was a gain or a loss trial. Each trial showed two geometric shapes with different chances of producing an outcome (70% vs. 30%). Gain trials yielded +10 or 0, while loss trials yielded −5 or 0.

Researchers examined how strongly neurons fired just before each choice and how variable that activity was across trials. Variability served as “neural noise.” Computational models estimated how closely choices followed expected value and how much they reflected overall uncertainty.

Results showed more exploration in loss trials than in gain trials. After people learned which option was better, exploration stayed higher in loss trials and accuracy fell more from its peak.

Pre-choice firing in the amygdala and temporal cortex rose before exploratory choices in both gain and loss, indicating a shared, valence-independent rate signal. Loss trials also showed noisier amygdala activity from cue to choice.

More noise was linked to higher uncertainty and a higher chance of exploring, and noise declined as learning progressed. Using the measured noise levels in decision models reproduced the extra exploration seen in loss trials. Loss aversion did not explain the gap.
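One way to see how injected noise produces extra exploration is a toy softmax choice model (our construction, not the authors’ fitted model):

```python
# Toy illustration: adding extra decision noise to a value-guided softmax
# chooser raises the fraction of exploratory (non-best) choices, mimicking
# the loss-trial pattern described above.
import numpy as np

rng = np.random.default_rng(0)

def exploration_rate(noise_sd, n_trials=100_000, beta=3.0):
    values = np.array([1.0, 0.0])  # learned values: option 0 is objectively better
    # Corrupt the value estimates with trial-by-trial Gaussian "neural noise":
    noisy = values + rng.normal(0.0, noise_sd, size=(n_trials, 2))
    # Softmax over two options reduces to a logistic of the value difference:
    p_best = 1.0 / (1.0 + np.exp(-beta * (noisy[:, 0] - noisy[:, 1])))
    chose_best = rng.random(n_trials) < p_best
    return 1.0 - chose_best.mean()  # exploratory = picking the worse option

print(f"gain-like (low noise):  {exploration_rate(0.0):.1%} exploration")
print(f"loss-like (high noise): {exploration_rate(0.8):.1%} exploration")
```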

Researchers report two neural signals that shape exploration. A valence-independent firing-rate increase in the amygdala and temporal cortex precedes exploratory choices in both gain and loss. A loss-specific rise in amygdala noise raises the odds of exploration under potential loss, scales with uncertainty, and wanes as learning accrues.

Behavioral modeling matches this pattern, with value-only rules fitting gain choices and value-plus-total-uncertainty rules fitting loss choices. Findings point to neural variability as a lever that tilts strategy toward more trial-and-error when loss looms, while causal tests that manipulate noise remain to be done.

From an evolutionary survival perspective, the strategy fits well with the need to seek out new resources when facing the loss of safe or familiar choices. While one might consider trying a new restaurant at any time, true seeking behavior will become a priority if the favorite location is closed for remodeling.

How ‘brain cleaning’ while we sleep may lower our risk of dementia

The brain has its own waste disposal system—known as the glymphatic system—that’s thought to be more active when we sleep.

But disrupted sleep might hinder this waste disposal system and slow the clearance of waste products or toxins from the brain. And researchers are proposing that a build-up of these toxins due to lost sleep could increase someone’s risk of dementia.

There is still some debate about how this glymphatic system works in humans, with most research so far in mice.

But it raises the possibility that better sleep might boost clearance of these toxins from the human brain and so reduce the risk of dementia.

Here’s what we know so far about this emerging area of research.

Why waste matters

All cells in the body create waste. Outside the brain, the lymphatic system carries this waste from the spaces between cells to the blood via a network of lymphatic vessels.

But the brain has no lymphatic vessels. And until about 12 years ago, how the brain clears its waste was a mystery. That’s when scientists discovered the “glymphatic system” and described how it “flushes out” brain toxins.

Let’s start with cerebrospinal fluid, the fluid that surrounds the brain and spinal cord. This fluid flows in the areas surrounding the brain’s blood vessels. It then enters the spaces between the brain cells, collecting waste, then carries it out of the brain via large draining veins.

Scientists then showed in mice that this glymphatic system was most active—with increased flushing of waste products—during sleep.

One such waste product is amyloid beta (Aβ) protein. Aβ that accumulates in the brain can form clumps called plaques. These, along with tangles of tau protein found in neurons (brain cells), are a hallmark of Alzheimer’s disease, the most common type of dementia.

In humans and mice, studies have shown that levels of Aβ detected in the cerebrospinal fluid increase when awake and then rapidly fall during sleep.

But more recently, another study (in mice) showed pretty much the opposite—suggesting the glymphatic system is more active in the daytime. Researchers are debating what might explain the findings.

So we still have some way to go before we can say exactly how the glymphatic system works—in mice or humans—to clear the brain of toxins that might otherwise increase the risk of dementia.

Does this happen in humans too?

We know sleeping well is good for us, particularly our brain health. We are all aware of the short-term effects of sleep deprivation on our brain’s ability to function, and we know sleep helps improve memory.

In one experiment, a single night of complete sleep deprivation in healthy adults increased the amount of Aβ in the hippocampus, an area of the brain implicated in Alzheimer’s disease. This suggests sleep can influence the clearance of Aβ from the human brain, supporting the idea that the human glymphatic system is more active while we sleep.

This also raises the question of whether good sleep might lead to better clearance of toxins such as Aβ from the brain, and so be a potential target to prevent dementia.

How about sleep apnea or insomnia?

What is less clear is what long-term disrupted sleep, for instance if someone has a sleep disorder, means for the body’s ability to clear Aβ from the brain.

Sleep apnea is a common sleep disorder when someone’s breathing stops multiple times as they sleep. This can lead to chronic (long-term) sleep deprivation, and reduced oxygen in the blood. Both may be implicated in the accumulation of toxins in the brain.

Sleep apnea has also been linked with an increased risk of dementia. And we now know that after people are treated for sleep apnea, more Aβ is cleared from the brain.

Insomnia is when someone has difficulty falling asleep and/or staying asleep. When this happens in the long term, there’s also an increased risk of dementia. However, we don’t know the effect of treating insomnia on toxins associated with dementia.

So again, it’s still too early to say for sure that treating a sleep disorder reduces your risk of dementia because of reduced levels of toxins in the brain.

So where does this leave us?

Collectively, these studies suggest enough good quality sleep is important for a healthy brain, and in particular for clearing toxins associated with dementia from the brain.

But we still don’t know if treating a sleep disorder or improving sleep more broadly affects the brain’s ability to remove toxins, and whether this reduces the risk of dementia. It’s an area researchers, including us, are actively working on.

For instance, we’re investigating the concentration of Aβ and tau measured in blood across the 24-hour sleep-wake cycle in people with sleep apnea, on and off treatment, to better understand how sleep apnea affects brain cleaning.

Researchers are also looking into the potential for treating insomnia with a class of drugs known as orexin receptor antagonists to see if this affects the clearance of Aβ from the brain.

If you’re concerned

This is an emerging field and we don’t yet have all the answers about the link between disrupted sleep and dementia, or whether better sleep can boost the glymphatic system and so prevent cognitive decline.

So if you are concerned about your sleep or cognition, please see your doctor.

Online testing uncovers a common multiple sclerosis subtype with hidden cognitive deficits

King’s College London and Imperial College London, in collaboration with the UK MS Register, report a prevalent multiple sclerosis (MS) subtype marked by significant cognitive deficits with minimal motor impairment, a form of disability the authors state is currently unrecognized and untreated.

Cognitive problems in MS affect 40–70% of patients, can appear at any disease stage and across subtypes, and often affect work and quality of life more than physical impairment.

Standard neuropsychological assessments can be lengthy (several hours long) or narrowly focused. Cognition is not routinely measured in MS clinical trials where the focus tends to prioritize more obvious physical signs of the disease. Absence of reliable routine assessments excludes cognition from being part of phenotype definitions, making the impact, prevalence and severity of cognitive deficits unclear.

In the study, “Large-scale online assessment uncovers a distinct Multiple Sclerosis subtype with selective cognitive impairment,” published in Nature Communications, researchers optimized and deployed an automated online assessment to characterize MS-related cognitive deficits at population scale and to derive symptom-based phenotypes.

A total of 4,526 UK MS Register members participated across three stages. Stage 1 invited 19,188 registrants, with 3,066 engaging. Stage 2 recorded 2,696 engagements, including 1,425 first-time participants for independent validation. Stage 3 enrolled 31 patients for in-person comparison with a standard neuropsychological assessment with trained examiners.

Stage 1 evaluated 22 online tasks on the Cognitron platform to gauge feasibility and MS discriminability, then selected a 12-task battery based on effect sizes, device sensitivity, and factor analysis spanning six latent domains. Accuracy was prioritized as the primary metric for most tasks due to weaker correlations with motor patient-reported outcomes.
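Effect-size-based task screening of this kind can be sketched as ranking tasks by Cohen’s d between groups (toy data below; the actual selection also weighed device sensitivity and factor structure):

```python
# Toy sketch of effect-size screening: rank 22 candidate tasks by Cohen's d
# between MS patients and controls, keep the top 12. All data are placeholders.
import numpy as np

def cohens_d(a, b):
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

rng = np.random.default_rng(2)
tasks = {f"task_{i:02d}": (rng.normal(0, 1, 300),              # controls
                           rng.normal(-0.1 * i / 22, 1, 300))  # MS (toy shift)
         for i in range(1, 23)}                                # 22 tasks, as in Stage 1

ranked = sorted(tasks, key=lambda t: abs(cohens_d(*tasks[t])), reverse=True)
print("12-task battery (toy ranking):", ranked[:12])
```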

Stage 2 administered the 12-task battery plus two object memory tasks and replicated discriminability in an independent cohort.

Stage 3 compared the online battery with a comprehensive in-person assessment and included the Nine-Hole Peg Test to index hand motor function.

Stages 1 and 2 demonstrated high feasibility, with over 70% of participants completing the battery in a single sitting and a median duration of roughly 40 minutes.

Four symptom-based groups emerged when online thinking measures were combined with two motor questionnaires in Stage 2.

  • Minimal Motor + Moderate Cognitive—little day-to-day movement difficulty paired with clear problems in memory, reasoning, task switching, and attention. Group size reached 26.0%, with 44.6% classified as cognitively impaired.
  • Minimal Motor + No Cognitive—little movement difficulty and broadly intact thinking skills, apart from slower performance on a single task. Group size reached 25.5%, with 1% classified as impaired.
  • Severe Motor + Mild Cognitive—marked movement difficulty with smaller thinking problems. Group size reached 34.2%, with 28.3% classified as impaired.
  • Severe Motor + Severe Cognitive—marked movement difficulty with broad thinking problems. Group size reached 14.3%, with 98.2% classified as impaired.

Stage 3 put the online battery side by side with an in-person clinic assessment of 31 people. Results from the two approaches matched closely for overall cognitive status. Of the 16 people the clinic tests labeled as cognitively impaired, the online battery identified 75%. Across all participants, the online battery classified 20 of 31 as impaired.

Authors conclude that a common MS subtype exists with minimal motor disability yet substantial cognitive problems, revealed only by detailed cognitive profiling, and that a fully automated, large-scale online assessment can feasibly detect this selective cognitive subtype, which standard motor-focused evaluations miss.

Vibration-powered chip could revolutionize assisted reproductive technology

In the quest to address infertility, Cornell researchers have developed a groundbreaking device that could simplify and automate oocyte cumulus removal, a critical step in assisted reproductive technologies.

Their vibration-powered chip not only simplifies a complex procedure but also extends it to areas of the world lacking skilled embryologists or well-funded labs, reducing overall costs. This offers hope to millions of couples struggling with infertility—and makes fertility treatments more accessible worldwide.

“This platform is a potential game-changer,” said Alireza Abbaspourrad, associate professor of food chemistry and ingredient technology in food science in the College of Agriculture and Life Sciences (CALS). “It reduces the need for skilled technicians, minimizes contamination risks and ensures consistent results—all while being portable and cost-effective.”

He is co-author of the study “On-Chip Oocyte Cumulus Removal using Vibration Induced Flow,” published Sept. 5 in the journal Lab on a Chip.

Doctors treating infertility must perform a critical step: gently separating protective cumulus cells from oocytes, the developing egg cells. The process, known as cumulus removal (CR), is essential for evaluating oocyte maturity before intracytoplasmic sperm injection, or for confirming successful fertilization after insemination during in vitro fertilization (IVF).

Traditionally, CR relies on manual pipetting: by flushing the single oocyte repeatedly with a micropipette, cumulus cells are detached from the oocyte. However, the technique demands precision, expertise and significant time. Errors can lead to damaged oocytes or failed fertilization, making the procedure a delicate and labor-intensive task.

The team’s innovation: a disposable, open-surface chip that uses vibrations, which they call vibration-induced flow, to automate CR. The chip features a spiral array of micropillars that create a whirling flow when vibrated, separating smaller cumulus cells from larger oocytes.

“The process is fast, efficient, noninvasive and more consistent, while reducing manual labor and preserving embryo development outcomes,” said Amirhossein Favakeh, a doctoral candidate in Abbaspourrad’s lab and a co-author of the study. “The oocytes remain safely in the loading chamber, while the cumulus cells are swept into an adjacent collection well.”

The researchers tested the device on mouse oocytes, which share genetic similarities with human eggs. They optimized the system by adjusting vibration power, exposure time and enzyme concentration. They found that the platform could denude up to 23 oocytes simultaneously without any loss or damage. Even freeze-thawed oocytes, which are typically more fragile, were successfully processed.

To ensure the safety of the technique, the team compared fertilization and embryo development rates between oocytes denuded manually and those treated with vibration-induced flow. The results were nearly identical: fertilization rates were 90.7% for manual pipetting and 93.1% for vibration-induced flow, while rates of blastocyst formation (balls of cells formed early in a pregnancy) were 50.0% and 43.1%, respectively.
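As an illustrative check of how similar those fertilization proportions are (not an analysis from the paper; the group sizes below are hypothetical, since exact denominators are not given here), a two-proportion z-test finds no meaningful difference:

```python
# Two-proportion z-test sketch with hypothetical denominators chosen so the
# fractions match the reported rates: 39/43 = 90.7%, 54/58 = 93.1%.
from statsmodels.stats.proportion import proportions_ztest

stat, pval = proportions_ztest(count=[39, 54], nobs=[43, 58])
print(f"z = {stat:.2f}, p = {pval:.2f}")  # large p: no evidence of a difference
```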

“This shows that our method doesn’t compromise the developmental potential of the oocytes,” Abbaspourrad said.

The implications of this technology extend far beyond fertility clinics. The chip’s ability to separate particles of different sizes could be applied to other biomedical fields, such as cancer cell isolation or microfluidic research. Its low cost and ease of use make it particularly appealing for regions with limited access to advanced medical facilities.

Favakeh said this approach has the potential to democratize access to fertility treatment by reducing the reliance on expensive equipment and highly trained embryologists, which might allow these procedures to be brought to underserved areas.

“Ordinarily, the whole process is costly and delicate; clinics invest a lot of time in training and it is very dependent on human resources,” Abbaspourrad said. “With this, you don’t need a highly trained human to do it. And what is really important is there is almost no chance of damaging or losing the cell.”

The team plans to expand their research to include human oocytes and explore applications in intracytoplasmic sperm injection, in which CR is performed prior to fertilization. They also aim to refine the chip’s design for broader use in cell manipulation and sorting.

For now, the Cornell scientists are celebrating a major step forward in assisted reproductive technologies, they said.

This is a small device with a big impact, Abbaspourrad said.

“Replacing tedious manual methods with a simple vibration-based chip improves the speed, safety and consistency of oocyte preparation,” he said, “making fertility treatments more accessible and reliable.”