When we understand someone, our brain activity actually mimics what the other person is thinking or feeling. It's called mirroring, and it's usually invisible, because it happens entirely inside your head. But a new piece of wearable technology may bring us one step closer to actually observing what someone's brain looks like when they understand what you're telling them.
In a study published Monday in the journal Scientific Reports, biomedical engineers at Drexel University and psychologists at Princeton University report using a wearable, headband-style brain imaging tool to see how closely people's brain activity mirrors one another during communication, a phenomenon they call brain-to-brain coupling.
The device uses functional near-infrared spectroscopy (fNIRS) to measure where oxygenated hemoglobin is concentrated in the brain, and therefore which areas of the brain are more active at a given moment. Since the fNIRS device is a headband, it's much more practical than a typical fMRI machine, a massive device that requires a subject to lie perfectly still for long periods of time.
Speakers wore the headbands while recording stories, and listeners wore them while hearing those recordings. This way, the researchers could map how activity in each subject's brain matched up with particular moments in the story. By examining the degree to which the listeners' brain activity mirrored the storytellers', the researchers could approximate how well the listeners understood what they heard.
They found that higher levels of mirroring, when the speakers' and listeners' brain activity synced up the most, were associated with better understanding, and that the mirroring happened with a short delay. In other words, when a listener understood a speaker, the listener's brain imitated the speaker's after a few seconds. But this only happened when the listener understood.
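The paper doesn't publish its analysis code, but the delayed-mirroring measurement the article describes amounts to correlating the two brain-activity time series while shifting the listener's signal back in time. Here is a minimal illustrative sketch (not the authors' actual pipeline; the signal lengths, noise level, and `lagged_coupling` helper are all assumptions for the toy example):

```python
import numpy as np

def lagged_coupling(speaker, listener, max_lag):
    """Pearson correlation between a speaker's and a listener's
    activity time series, with the listener shifted earlier by
    0..max_lag samples to test for delayed mirroring."""
    corrs = {}
    for lag in range(max_lag + 1):
        s = speaker[: len(speaker) - lag] if lag else speaker
        l = listener[lag:]
        corrs[lag] = np.corrcoef(s, l)[0, 1]
    return corrs

# Toy example: the "listener" signal is the "speaker" signal
# delayed by 3 samples, plus a little noise.
rng = np.random.default_rng(0)
speaker = rng.standard_normal(200)
listener = np.concatenate([rng.standard_normal(3), speaker[:-3]])
listener += 0.1 * rng.standard_normal(200)

corrs = lagged_coupling(speaker, listener, max_lag=10)
peak_lag = max(corrs, key=corrs.get)
print(peak_lag)  # the correlation peaks at a lag of 3 samples
```

A peak in correlation at a nonzero lag, as in this toy case, is the signature the researchers looked for: the listener's activity tracks the speaker's, but a few seconds behind.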
“We observed significant speaker-listener temporal coupling only during successful verbal communication,” write the researchers. “When communication was blocked (i.e., during listening to a foreign, incomprehensible language), the synchronization was lost.”
These results are promising and fascinating, but this study was limited by a couple of factors.
First, the number of subjects was relatively low: three speakers and 15 listeners. Second, when the researchers used fMRI to validate their fNIRS results, they didn’t use the same subjects.
Therefore, the researchers say that future studies should use both fNIRS and fMRI simultaneously to get a fuller picture of how well a listener’s brain mirrors a speaker’s.
Photos via Liu, Piazza, Simony, Shewokis, Onaral, Hasson, and Ayaz
Abstract: The present study investigates brain-to-brain coupling, defined as inter-subject correlations in the hemodynamic response, during natural verbal communication. We used functional near-infrared spectroscopy (fNIRS) to record brain activity of 3 speakers telling stories and 15 listeners comprehending audio recordings of these stories. Listeners’ brain activity was significantly correlated with speakers’ with a delay. This between-brain correlation disappeared when verbal communication failed. We further compared the fNIRS and functional Magnetic Resonance Imaging (fMRI) recordings of listeners comprehending the same story and found a significant relationship between the fNIRS oxygenated-hemoglobin concentration changes and the fMRI BOLD in brain areas associated with speech comprehension. This correlation between fNIRS and fMRI was only present when data from the same story were compared between the two modalities and vanished when data from different stories were compared; this cross-modality consistency further highlights the reliability of the spatiotemporal brain activation pattern as a measure of story comprehension. Our findings suggest that fNIRS can be used for investigating brain-to-brain coupling during verbal communication in natural settings.