Listen up

Why people enjoy songs comes down to a uniquely human ability — study

Complex brain communication lets us process music and lyrics.

From lullabies to dance tracks, songs make up our life's soundtrack. But how humans separate words and melodies from a single sound wave — a capability no other member of the animal kingdom possesses — has perplexed researchers for centuries.

In a new study, published in the journal Science, scientists discovered an “optimal, elegant solution” in the brain that enables humans to process melody and speech simultaneously and efficiently. Its existence allows us to easily separate the music from the lyrics via a crucial cognitive assist.

Scientists have known that when the brain processes songs, the work is split: The left brain is largely devoted to processing speech, while the right brain interprets melodies. But, crucially, they’ve never understood why.

This study reveals that the left- and right-brain systems actually work in tandem — processing the acoustic elements of a song simultaneously and communicating the information across brain networks.

“With this research, we see it more as a brain network instead of dedicated regions for each domain,” Philippe Albouy, lead author on the study and a researcher at Université Laval, tells Inverse.

To drill down on this complex system, Albouy and his colleagues combined 10 original phrases with 10 original melodies. They enlisted the help of a singer, recording a collection of 100 unique acapella songs, which contained acoustic information in both the temporal (speech) and spectral (melodic) domains. Then, the researchers manipulated the songs, selectively degrading either the melodies or the lyrics.

Here's the original acapella sample, which sounds like something the bard from The Witcher would compose:

Now, compare that ditty to a sample where the melody has been manipulated in a minstrel-meets-Deadmau5 sort of way.
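For readers who want a feel for how that kind of manipulation works, here is a minimal sketch in Python. It is not the authors' code: the study used its own modulation-filtering pipeline, and the file name, STFT settings, and cutoff below are illustrative assumptions. The idea is that smoothing away fast temporal modulations blurs the speech cues, while smoothing away fine spectral detail blurs the melodic cues.

```python
# A rough sketch of modulation filtering, not the study's actual pipeline.
# Assumed: a mono WAV file; nperseg and cutoff values chosen for illustration.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

def degrade(wav_path, mode="temporal", cutoff=0.5, nperseg=1024):
    rate, audio = wavfile.read(wav_path)
    audio = audio.astype(np.float64)

    # Spectrogram: rows are frequencies, columns are time frames.
    _, _, Z = stft(audio, fs=rate, nperseg=nperseg)
    mag, phase = np.abs(Z), np.angle(Z)

    # Modulation domain: 2-D FFT of the magnitude spectrogram.
    M = np.fft.fftshift(np.fft.fft2(mag))
    n_freq, n_time = mag.shape
    mask = np.ones_like(M)

    if mode == "temporal":
        # Keep only slow temporal modulations -> speech becomes hard to follow.
        keep = int(n_time * cutoff / 2)
        mask[:, : n_time // 2 - keep] = 0
        mask[:, n_time // 2 + keep :] = 0
    else:
        # Keep only coarse spectral modulations -> melody becomes hard to follow.
        keep = int(n_freq * cutoff / 2)
        mask[: n_freq // 2 - keep, :] = 0
        mask[n_freq // 2 + keep :, :] = 0

    # Back to a spectrogram, re-impose the original phase, and resynthesize.
    mag_filtered = np.abs(np.fft.ifft2(np.fft.ifftshift(M * mask)))
    _, degraded = istft(mag_filtered * np.exp(1j * phase), fs=rate, nperseg=nperseg)
    return rate, degraded
```

Calling degrade("song.wav", mode="spectral") would blur the pitch detail while leaving the timing of the words largely intact, roughly the minstrel-meets-Deadmau5 effect in the second sample; mode="temporal" does the reverse.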

The team subsequently rounded up 27 native English speakers and 22 native French speakers to test how these manipulated songs influenced brain activity. In their first experiment, they played pairs of acapella songs — one with a tweaked melody and one with tweaked speech. They asked participants to focus on either the melody or the "sentences" in the songs and measured how well they could pick out melodies or lyrics from the garbled tracks.

They found that people had trouble recognizing sentences when the speech information in the songs was degraded, and trouble recognizing melodies when the melodic information was degraded. These findings echo other research pointing to hemispheric specialization — one side of the brain is better at processing melodies, while the other is better at processing speech.

A visualization of the brain's divided system for processing music and speech.

Philippe Albouy

Next, the researchers stuck 15 of the native French speakers in functional magnetic resonance imaging (fMRI) machines (scanners that record blood-flow activity in the brain). While participants were in the brain scanners, the researchers played blocks of five acapella songs, degraded in either their melodies or their speech.

"This can be considered as an elegant solution of the central nervous system."

Analyses of the brain scans revealed that the ability to decode speech depends on activity patterns in left auditory regions, while the ability to decode melodies depends on right auditory regions.

Taken together, the findings suggest the brain evolved specialized regions that can interpret sometimes subtle differences in frequency, pitch, timing, and other communication signals. This division of labor allows us to process music quickly and efficiently. It’s how you can follow along with Eminem’s rapid-fire rapping — without being distracted by heavy beats.

"This can be considered as an elegant solution of the central nervous system to optimize the processing of two important communicative signals in the human brain: speech and music," Albouy says.

Next, the research team hopes to repeat the experiment with other languages and larger groups of people, Albouy says. This discovery could eventually lead to the development of new treatments: Damage to the brain from something like a stroke can make it difficult to process a song. By understanding what exactly happens cognitively when the radio's turned up, scientists can better learn how to help the brain's left and right sides work in harmony.

Abstract: Does brain asymmetry for speech and music emerge from acoustical cues or from domain-specific neural networks? We selectively filtered temporal or spectral modulations in sung speech stimuli for which verbal and melodic content was crossed and balanced. Perception of speech decreased only with degradation of temporal information, whereas perception of melodies decreased only with spectral degradation. Functional magnetic resonance imaging data showed that the neural decoding of speech and melodies depends on activity patterns in left and right auditory regions, respectively. This asymmetry is supported by specific sensitivity to spectrotemporal modulation rates within each region. Finally, the effects of degradation on perception were paralleled by their effects on neural classification. Our results suggest a match between acoustical properties of communicative signals and neural specializations adapted to that purpose.