
Neuroscientists Reconstructed Snippets of a Pink Floyd Song From a Person’s Thoughts

Rock on, neurons.

by Miriam Fauzia
[Image: Brain model with headphones. David Crockett/Moment/Getty Images]

Ever had a catchy melody stuck on repeat in your head and wished something could read your mind and play the tune out loud in real life? That may sound like the premise of a Black Mirror episode, but it’s an ability that may be coming to a brain-computer interface near you in the not-too-distant future.

In a paper published Tuesday in the journal PLOS Biology, scientists at the University of California, Berkeley, managed to reconstruct snippets of Pink Floyd’s “Another Brick in the Wall, Part 1” by analyzing the brain activity of 29 individuals as they listened to the iconic rock song. Their analysis revealed that one particular region of the brain, the superior temporal gyrus, seems to be especially attuned to rhythm perception.

“It’s a beautiful case study of how to reveal these [neural] building blocks, look at how the different pieces of music are encoded in the brain, and showing that there really are specializations for music,” Barbara Shinn-Cunningham, director of Carnegie Mellon University’s Neuroscience Institute, who was not involved in the study, tells Inverse.

Decoding a jam session

When we hear music, a multitude of processes in our noggins happen at once. Sound vibrations travel from the eardrum to the cochlea, a fluid-filled, spiral-shaped structure. The cochlea converts the vibrations into electrical signals that travel along the auditory nerve to various regions of the brain, which engage with and store that information. For example, the auditory cortex recognizes musical elements such as pitch, rhythm, and melodies.

In the new study, the researchers wanted to see which parts of the brain were most active when people listened to three specific musical elements: chords (at least three notes played together), harmony (the collective composite of individual musical voices), and rhythm (the pattern of sound, silence, and emphasis).

To do this, Ludovic Bellier, the paper’s first author and a computational research scientist at UC Berkeley, analyzed data from 29 individuals who, between 2008 and 2015, had their brain activity recorded while listening to Pink Floyd’s “Another Brick in the Wall, Part 1.” The brain activity was captured with electrodes placed directly on the surface of the participants’ brains, an invasive technique known as intracranial EEG.

Bellier had to go through data amassed from nearly 2,400 electrodes, he tells Inverse. To help with the herculean task, he and his colleagues used a regression-based decoding model, in which a computer learns how the electrical activity recorded at each electrode relates to features of the song, revealing which parts of the brain carry musical information.
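For readers curious what a regression-based decoder looks like in practice, here is a minimal, hypothetical sketch in Python. It uses placeholder arrays standing in for the neural recordings and the song’s spectrogram, and ridge regression from scikit-learn; it illustrates the general idea under those assumptions, not the study’s actual pipeline.

```python
# Minimal sketch of a regression-based decoding model (illustrative only,
# not the authors' code). Assumes `neural` holds band-limited brain activity
# with shape (n_timepoints, n_electrodes) and `spectrogram` holds the song's
# audio features aligned to the same timepoints, shape (n_timepoints, n_bins).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
neural = rng.standard_normal((3000, 347))       # placeholder neural features
spectrogram = rng.standard_normal((3000, 32))   # placeholder audio features

X_train, X_test, y_train, y_test = train_test_split(
    neural, spectrogram, test_size=0.2, shuffle=False
)

# Ridge regression maps electrode activity to audio features; the fitted
# weights indicate how strongly each electrode contributes to the prediction.
decoder = Ridge(alpha=1.0).fit(X_train, y_train)
reconstruction = decoder.predict(X_test)        # decoded spectrogram frames
print("decoding r^2:", decoder.score(X_test, y_test))
```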

This let them whittle the thousands of electrodes down to 347 specifically related to music. These electrodes spanned three regions of the brain: the superior temporal gyrus, located in the temporal lobes on the sides of the brain; the sensory-motor cortex; and the inferior frontal gyrus in the frontal lobes. That wasn’t entirely surprising, since these regions play various roles in processing musical information (the inferior frontal gyrus also contains Broca’s area, which is responsible for language comprehension and production). What was surprising, says Bellier, was that the right side of the superior temporal gyrus was key to encoding information about rhythm, in this case, guitar rhythm.

When the researchers recreated the Pink Floyd song from the brain activity, leaving out data from the right superior temporal gyrus electrodes prevented them from accurately reconstructing the song, underscoring how important this region is to music perception.
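That kind of leave-out check could look something like the continuation of the hypothetical sketch above, assuming a made-up boolean mask marking which electrodes sit over the right superior temporal gyrus; again, this is an illustration, not the authors’ analysis.

```python
# Continuing the sketch above: an ablation-style check (hypothetical
# electrode labels, not the real data).
right_stg = np.zeros(neural.shape[1], dtype=bool)
right_stg[:40] = True   # pretend the first 40 electrodes cover the right STG

keep = ~right_stg       # drop the right-STG electrodes and refit the decoder
ablated = Ridge(alpha=1.0).fit(X_train[:, keep], y_train)
print("r^2 without right-STG electrodes:", ablated.score(X_test[:, keep], y_test))
```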

Melody in the hardware

As brain-computer interfaces emerge as the newest frontier in helping paralyzed individuals walk again and those with neurodegenerative or muscular diseases speak again, Bellier and Shinn-Cunningham say these findings could make for a more productive back-and-forth between the brain and the software tasked with interpreting its electrical signals; for example, by enabling a better understanding of how music influences speech.

“A lot of computer interfaces focus on what a person is trying to do with their body because you can hook that up to a prosthetic limb or a wheelchair,” says Shinn-Cunningham.

“[But] if you have a brain-computer interface, it needs to be bi-directional, not just about control — taking signals that can help a person move around in the world or a substitute for their mouths to speak. [It needs] to take information in. Understanding how information is encoded and the key elements of that encoding are really helpful for building devices that can interface with the brain.”

Bellier says the findings may shed additional light on neurological conditions like aphasia, in which a person has difficulty speaking or understanding speech due to damage in regions of the brain responsible for language. He notes cases in which a patient of his co-author Robert Knight, a neurologist at UC Berkeley, couldn’t say sentences like “Happy birthday to you” or “Can I have a sausage” but was able to say them when singing. The underlying neural mechanism isn’t entirely clear, but singing therapy has been recommended for individuals with aphasia.

There are some limitations to the study that Bellier and his colleagues hope to address with further research, such as gathering data on songs that last more than three minutes and contain varied musical elements that don’t simply repeat. With companies like Google using AI to generate music from brain activity, further research into the neuroscience of music perception may very well change how we make music in the future.

