Pink Floyd song reconstruction from brain waves adds another brick to the future of voice and hearing technology

The melody is slowed and stretched, as though it were being dragged down a celestial black hole. The vocals are tinny, almost robotic, as if they’re coming through a radio station in another dimension.

But the song is, unquestionably, Pink Floyd’s 1979 ‘Another Brick In The Wall, Part 1.’ It’s the first audio ever to be reconstructed from a person’s brain activity as they listened to a song, a feat that marks a major milestone in scientists’ ability to decode and understand how the brain perceives music. It could eventually make assistive voice technologies and speech prosthetics better at replicating how humans talk by improving their emotionality and prosody, a term for the rhythm and intonation of speech.

“Currently, it’s difficult for voice assistants on phones, for example, to find the speaker’s most likely prosody, because it’s not something that the patient controls,” neuroscientist Ludovic Bellier, Ph.D., who decoded the data during his post-doc at the University of California, Berkeley, told Fierce Biotech Research in an interview. “You can’t really replicate all the different tones you could produce [naturally].”

In a study published Aug. 15 in PLOS Biology, Bellier and his team, led by Berkeley’s Robert Knight, M.D., described how they decoded the brain activity of 29 patients who passively listened to ‘Another Brick In The Wall’ while undergoing awake neurosurgery for epilepsy. That data had been collected between 2008 and 2015 by another research team that previously used it to answer different questions about how the brain perceived music, but had not tried to reconstruct the song.

“They tried to correlate some aspects of the song—the acoustics etc—with the neural activity, but they weren’t using encoding and decoding models,” Bellier explained.

Why ‘Another Brick In The Wall’? “It’s because they love Pink Floyd,” he said. “They thought it would be cool to have patients listening to Pink Floyd.”

Of course, there were other reasons too, he pointed out: The first half of the song is made of vocals and instrumentals, while the second half is instrumental only. On top of that, “It’s a sweet spot of familiarity,” Bellier added.

“Part 1 [of the three-part ‘Another Brick In The Wall’ suite on Pink Floyd’s rock opera ‘The Wall’] is more of the introduction song, so Pink Floyd fans will know it, but non-Pink Floyd fans may just think, ‘Oh, it sounds familiar,’” he said. “And it’s not hard metal—it’s not overwhelming.”

The researchers who collected the data didn’t ask the patients if they knew the song, nor whether they were musicians themselves—factors that could, in theory, affect how their brains perceived the music, Bellier noted.

“That would have been nice to know if it was one way or the other,” he said. “Sometimes if you are highly trained at something, your brain uses less energy—it’s optimized, so it’ll be harder to decode because it’s way more sophisticated.”

But those details weren’t necessary for the Berkeley team to see whether they could reconstruct the song from the subjects’ neural signals. They took the recordings from the 347 electrodes placed on the patients’ brains and used artificial intelligence to work out which elements of the song the signals represented, then used those decoded elements to reconstruct a portion of the song.
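To give a rough sense of how this kind of decoding works, the sketch below maps simulated electrode activity onto an audio spectrogram with a simple regression model and scores how well the prediction matches the original. It is not the study’s actual pipeline: the data are synthetic, and the choice of a ridge regression decoder, the number of frequency bins and the scoring by correlation are all illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): decode an audio spectrogram
# from simulated electrode activity using ridge regression.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dimensions: 347 electrodes, 128 spectrogram frequency bins,
# a few thousand time points recorded while the song plays.
n_time, n_electrodes, n_freq_bins = 5000, 347, 128

# Simulated "ground truth": a linear mapping plus noise stands in for the
# real, far more complex relationship between brain activity and sound.
true_weights = rng.normal(size=(n_electrodes, n_freq_bins))
neural = rng.normal(size=(n_time, n_electrodes))  # electrode features over time
spectrogram = neural @ true_weights + rng.normal(scale=0.5, size=(n_time, n_freq_bins))

# Hold out a contiguous chunk of the song for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    neural, spectrogram, test_size=0.2, shuffle=False
)

# Decoding model: predict every spectrogram bin from all electrodes at once.
decoder = Ridge(alpha=1.0)
decoder.fit(X_train, y_train)
predicted = decoder.predict(X_test)

# Correlation between predicted and actual spectrogram bins is one common
# way to score reconstruction quality.
corrs = [np.corrcoef(predicted[:, k], y_test[:, k])[0, 1] for k in range(n_freq_bins)]
print(f"mean decoding correlation: {np.mean(corrs):.2f}")
```

In a real experiment, the predicted spectrogram would then be converted back into a waveform, which is what produces the slowed, tinny rendition of the song described above.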

A few things stood out: Signals from electrodes that picked up sound onset or rhythm were integral to reconstructing the song, suggesting those elements are central to how the brain perceives music. And the researchers discovered that a specific region in the right superior temporal gyrus was uniquely responsible for perceiving rhythm.

The development is one of several in recent years that could eventually make voice prosthetics and brain-computer interfaces sound more humanlike. Beyond voice assistants, Bellier thinks the findings could also be used to build better devices for people with hearing impairments that can’t be easily fixed with hearing aids. The application would rely on external microcortical stimulation—technology that doesn’t exist just yet, though researchers are working on it. But when it does, it could ultimately allow such patients to perceive a wider range of sounds.

“If we understand exactly in which regions of the brain the musical and speech information is represented and how it’s represented, then we could stimulate super complex patterns to restore [that perception],” Bellier said. “That could be some kind of brain-computer interface—not to read from the brain, but to write into it.”
