Computers that run on human brain cells sound like something out of a science fiction novel—one that ends with a dire warning about the dangers of merging man and machine. But for scientists working in the newly dubbed field of organoid intelligence, or OI, bringing together neurons and silicon offers enormous potential for understanding and treating brain disease.
The researchers laid out the case and challenges for OI in an article published Feb. 28 in Frontiers in Science. They explained the kinds of investments needed to help the discipline reach its full potential—along with the ethical quandaries that will arise along the way.
“[The combination of OI and] artificial intelligence can really bring us a step forward to finding drugs against Alzheimer’s or helping us treat kids with developmental or learning disorders without spending millions of dollars on preclinical trials and then unsuccessful clinical trials,” first author Lena Smirnova, Ph.D., told Fierce Biotech in an interview.
In her lab at Johns Hopkins University, Smirnova is working on brain organoids—or, using her preferred terminology, micro-physiological systems, or MPS. Her goal is to turn these 3D models into viable alternatives for environmental toxicology studies. Animal models are so expensive, and so inaccurate at reflecting how substances impact the human brain, that many chemicals around us aren’t tested at all, she said. Organoid models are already in use for preclinical research in many labs, and the pharmaceutical industry is taking them seriously for use in every phase of the drug development process, as Roche executive Hans Clevers told Nature in February.
But brain MPS have some unique limitations compared to MPS of other organs. An MPS of any organ system should mirror the structure and function of its human counterpart, Smirnova explained.
“For example, the function of the lung is breathing, so [an MPS] of the lung would breathe,” she said. “The liver would metabolize, the kidney would excrete. We try to recapitulate those functions of the organ in humans in these small, miniature cultures.”
For the brain, that means capturing its ability to process information in the form of learning and memory. But cells in a dish by themselves can’t demonstrate learning in a meaningful way, Smirnova explained. That creates a blind spot when it comes to understanding how substances act on the brain.
“Showing that a chemical can perturb the ultimate functionality of the brain—or that it can protect cognition or protect learning—that would be the key,” she said.
Smirnova and her colleagues believe that OI could bridge that gap. It could also lift the veil on the mechanisms that underpin learning and memory, giving better insight and offering potential treatments for still-undruggable diseases like Alzheimer’s.
“Imagine if we can show a system can learn—and there’s pretty good evidence for this already—then we can compare healthy organoids to some created from cells from an Alzheimer’s patient,” the paper’s senior author Thomas Hartung, M.D., Ph.D., who is spearheading OI research at Johns Hopkins, said. “Can we then find substances that reestablish the quality of learning?”
The evidence Hartung cited is DishBrain, a closed-loop system consisting of a 2D neuronal culture linked to a computer that could simultaneously send signals to and read signals from the cells. In October 2022, DishBrain creator Cortical Labs published a paper in Neuron detailing how it had successfully taught the neurons to play the 1970s computer game Pong. Cortical Labs founder Brett Kagan, Ph.D., was one of the authors on the new OI paper.
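For readers curious about what “closed loop” means in practice, here is a minimal Python sketch of the general read-decode-stimulate cycle: activity is read from an electrode array, decoded into a paddle move, and the outcome is fed back to the cells as stimulation. Everything here, including the SimulatedMEA class and all its parameters, is a hypothetical stand-in for illustration, not Cortical Labs’ actual hardware or software.

```python
import random

class SimulatedMEA:
    """Hypothetical stand-in for a multi-electrode array interface."""
    def __init__(self, n_electrodes=8):
        self.n = n_electrodes
        self.bias = [random.random() for _ in range(n_electrodes)]

    def read_spike_counts(self):
        # Pretend-read: random spike counts loosely shaped by a per-electrode bias.
        return [int(b * random.randint(0, 20)) for b in self.bias]

    def stimulate(self, pattern):
        # Pretend-write: a real system would deliver voltage pulses here.
        pass

def decode_action(counts, split):
    """Map population activity to a paddle move: more firing on the
    'up' half of the array moves the paddle up, and vice versa."""
    up, down = sum(counts[:split]), sum(counts[split:])
    return 1 if up > down else -1

def closed_loop_step(mea, ball_y, paddle_y):
    counts = mea.read_spike_counts()
    paddle_y += decode_action(counts, split=mea.n // 2)
    if abs(paddle_y - ball_y) <= 1:
        # "Hit": feed back a predictable, structured stimulation pattern.
        mea.stimulate([1] * mea.n)
    else:
        # "Miss": feed back unpredictable, noisy stimulation.
        mea.stimulate([random.randint(0, 1) for _ in range(mea.n)])
    return paddle_y

mea = SimulatedMEA()
paddle = 0
for t in range(10):
    paddle = closed_loop_step(mea, ball_y=random.randint(-3, 3), paddle_y=paddle)
    print(f"step {t}: paddle at {paddle}")
```

Notably, the feedback in the Neuron study was reportedly not a conventional reward signal: hits were followed by predictable stimulation and misses by unpredictable noise, which the cultures appeared to act to avoid. The loop above is only meant to convey the overall cycle.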
Smirnova’s lab is in the early stages of doing something similar with their brain MPS. Their model is simple compared to the human brain: it has neither the brain’s many characteristic layers nor its larger-scale structure. Still, if a 2D model like DishBrain can learn, a 3D model, with its greater complexity, may be able to do even more. Characterizing the MPS’ gene expression patterns, electrical signaling responses and other properties will help answer whether that’s the case.
“We want to know what this model can give us,” Smirnova said. “Do we have the complete molecular machinery of memory there?”
If the MPS does turn out to have all the makings of a system that can learn, it will then be time to immerse it in a learning environment by hooking it up to a machine—and, thus, establish organoid intelligence. Even then, though, that won’t be enough to meet Smirnova’s goal of truly recreating the brain’s function. Learning and memory are complex processes that involve multiple brain regions and cell types.
To go beyond short-term memory formation, the MPS would likely need to be much larger, with distinct layers and regions. It will also require immune cells, which are critical for forming connections between neurons, along with some form of vascularization or perfusion to carry nutrients deep into the tissue.
Progress is being made on all these fronts, Smirnova said, but for MPS that can fully recapitulate the brain, it’s still early days. Using the Human Genome Project as an analogy, “we’re at the stage of understanding what nucleotides are present in DNA,” she explained.
Semantics and ethics
Technical limitations aren’t the only concerns the field will have to contend with. At the most fundamental level lies the challenge of semantics: When the Kagan paper was published, many neuroscientists pushed back on the idea that he’d established “sentience,” “goal-directed behavior” or “intelligence” in the neurons in a dish. Some laid out their concerns in a letter published March 1 in Neuron, noting that the use of such terms wasn’t just unnecessarily provocative, but also misleading.
“The application of intelligence and sentience to neurons-in-a-dish in this paper is not based on any established or robust consensus on the definitions of these very important terms,” the researchers wrote. They also criticized the results of the study, saying they were too weak to justify the strong conclusions made.
“Overselling scientific results directly impacts the evaluation of scientific reliability and credibility,” they said in the paper. “Claiming that a cell culture embedded in a closed-loop system demonstrates sentience and intelligence might impact the public perception of what in nature is sentient and intelligent and could trigger ethical debates fueled by misunderstanding.”
In a response of their own, also published March 1 in Neuron, Kagan and his co-authors acknowledged the scientists’ concerns about the language they used, though they denied overselling their results. They added that they had engaged with ethicists to understand the meaning of “sentience” in the context of neurons in a dish.
The question of what constitutes sentience is only the beginning. Most of the ethical concerns about OI have centered on the question of consciousness, which is also ill-defined. If brain organoids can develop something even remotely like it, will they also be capable of pain and suffering? Would they also have rights?
“I would say a very big challenge is to do this in a very ethical way,” Hartung said. Questions like these are why Hartung, Smirnova and their collaborators are making bioethicists a key component of their plan for advancing OI. They’ve also proposed establishing a common language, developing best practice guidelines and conducting research on the neural basis of consciousness.
Much of the new OI paper is devoted to exploring the field’s more far-flung theoretical applications, such as powering computers with brain cells to make them more efficient. These have already prompted visions of a dystopian future where the human brain is used for “purely instrumental purpose,” as one writer put it.
While OI could indeed lead to improvements in silicon-based computing, no one will be running a laptop on a brain—at least not any time soon. “That’s not going to happen,” Hartung said. But other OI advances, like the ones that could give researchers like Smirnova the ability to test drugs’ impact on the developing brain, aren’t so far off. As she pointed out, there’s still much to be done before the brain is recapitulated in its full capacity, with the ability to form long-term memories. But steps along that road will manifest as practical applications for OI, according to Hartung.
“The next step, which is a big one, is to exploit this for drug development or for toxicology,” Hartung said. The team expects that they’ll have a reproducible system to model learning within one to two years. Then, through collaborations with scientists studying autism and Alzheimer’s, they plan to build systems for studying neurodevelopmental conditions and degenerative disorders.
“We’re certainly not talking about something that your grandfather can get next year, but it is something that, by scientific standards, is to the point where it can make a practical contribution in a reasonable time,” Hartung said.
Meanwhile, he and his team welcome input from the public. Such developments can be “very scary for a general audience,” Hartung said. He wants people to have a full picture of the benefits and challenges of OI from the start.
“This is why we have been clearing this up from the very beginning,” he said.