Robots are already saving human lives, but we still don’t have to like them. Despite robots’ best efforts to please us, a lot of humans are still weirded out by lifelike artificial beings, though it’s unclear why we’re so unsettled by them.
Now, a team of neuroscientists in Europe believes that there’s a circuit in the brain that drives our descent into the eerie sensation known as the “uncanny valley.”
It seems logical that humans would tend to like a robot that looks more like themselves, but we’ve known since as early as 1970 that this isn’t the case. That year, robotics professor Masahiro Mori suggested that humans generally tend to like humanlike robots — until they reach a certain threshold.
At that point, we’re just creeped out by them until they become so lifelike that they actually re-earn our affection. Mori coined the name for this unsettling middle ground, where a robot is too humanlike for its own good but not humanlike enough to avoid being weird: the uncanny valley.
In a study published Monday in The Journal of Neuroscience, Fabian Grabenhorst, Ph.D., a neuroscientist at the University of Cambridge, and Astrid Rosenthal-von der Pütten, Ph.D., of Germany’s Aachen University, suggest that the uncanny valley has its deepest roots in the brain’s reward systems.
“Human reactions toward artificial agents involve distinct activity patterns in the brain’s valuation system,” Grabenhorst tells Inverse. “Our valuation system seems to be particularly sensitive to highly human-like artificial agents, and responds to these agents with a reduced value signal.”
The Brain’s Uncanny Valley
In this experiment, Grabenhorst and Rosenthal-von der Pütten had 21 subjects evaluate photos of four different types of humans, robots, and hybrids somewhere between the two. After looking at images of each, the participants had to judge how much they liked each being, as well as rate how familiar they found them and how humanlike they were.
The most striking pattern from their experiments was seen in the ventromedial prefrontal cortex (VMPFC), where Grabenhorst and Rosenthal-von der Pütten found that the brain’s activity followed its own uncanny valley. That area of the brain was more active in response to human or humanlike images, but less active in response to artificial beings that looked nearly human.
“The VMPFC responded to artificial agents precisely in the manner predicted by the Uncanny Valley hypothesis, with stronger responses to more human-like agents, but then showing a dip in activity for the most humanlike artificial agents — the characteristic ‘valley’,” Grabenhorst explains.
When combined with the behavioral data showing unflattering ratings of human-like robots, the team suggests that this activity represents a “valuation system” in the brain. That system, they explain, is finely tuned to help us distinguish between human-like and nonhuman-like agents and make judgments about them.
And interestingly, the more an artificial agent looks like a human, the more that internal value system tends toward “signaling a lower reward value for artificial social partners that have these features,” Grabenhorst adds.
Why Some People Like Robots Less Than Others
The idea that there’s a pattern in the brain that mimics the idea of the uncanny valley suggests that there’s a solid explanation behind the theory that Mori posited over 40 years ago. But as these authors note, they believe their findings can do more than just confirm the uncanny valley’s existence. They hope that they can figure out why some people are intrigued by the idea of a robot companion, while others find the idea unpalatable.
The reason for that, according to these results, may come down to the brain’s internal value system. Rosenthal-von der Pütten notes that this is the first study to show that the “UV effect” in the brain is more pronounced in some people than in others. In other words, some people tend to have a deeper uncanny valley, suggesting that there’s some additional reason they balk in the face of robotic life.
Would You Accept a Gift From a Creepy Robot?
Grabenhorst points to a follow-up experiment that could get at the reasons for that increased distaste. In that portion of the study, subjects also had to indicate whether they would accept a gift given to them by one of these artificial social agents. There, the researchers noted that the amygdala, an area of the brain involved in emotions, fear, and other deep-seated responses, played a bigger role in the brain responses of those who tended to reject gifts from artificial agents.
That, says Rosenthal-von der Pütten, suggests that robots are triggering different types of responses in different people.
“It is useful to understand where this repulsive effect may be generated and take into account that part of the future users might dislike very humanlike robots,” she says. “To me that underlines that there is no ‘one robot that fits all users’ because some users might actually like robots that give other people goosebumps or chills.”
Any robot trying to win that person’s love may face a steeper uphill battle, rooted in the brain.
Lingering Mysteries of the Uncanny Valley
When Mori first proposed the idea of the uncanny valley, he did so from a design perspective: that one day we would have to grapple with how humans actually feel about our advanced creations. Since then, his concept has taken on the weight of a scientific theory. In an interview with IEEE Spectrum in 2012, he explained that the brain-based explanations shed light on his idea, but they still leave some questions unanswered:
I do appreciate the fact that research is being conducted in this area, but from my point of view, I think that the brain waves act that way because we feel eerie. It still doesn’t explain why we feel eerie to begin with. The uncanny valley relates to various disciplines, including philosophy, psychology, and design, and that is why I think it has generated so much interest.
Although Mori wasn’t talking about this study, he still makes a relevant point. We still don’t know exactly why our brains seem to be wired to make negative judgments about very human-like robots.
But this study does at least pin down where in the brain that eeriness may come from, paving the way for future robots that don’t send our brains whirling. For now, though, some robots just seem too human-like for their own good.
Abstract: Artificial agents are becoming prevalent across human life domains. However, the neural mechanisms underlying human responses to these new, artificial social partners remain unclear. The Uncanny-Valley (UV) hypothesis predicts that humans prefer anthropomorphic agents but reject them if they become too human-like—the so-called UV reaction. Using functional MRI, we investigated neural activity when subjects evaluated artificial agents and made decisions about them. Across two experimental tasks, the ventromedial prefrontal cortex (VMPFC) encoded an explicit representation of subjects’ UV reactions. Specifically, VMPFC signaled the subjective likability of artificial agents as a nonlinear function of human-likeness, with selective low likability for highly humanlike agents. In exploratory across-subject analyses, these effects explained individual differences in psychophysical evaluations and preference choices. Functionally connected areas encoded critical inputs for these signals: the temporo-parietal junction encoded a linear human-likeness continuum, whereas nonlinear representations of human-likeness in dorsomedial prefrontal cortex (DMPFC) and fusiform gyrus emphasized a human-nonhuman distinction. Following principles of multisensory integration, multiplicative combination of these signals reconstructed VMPFC’s valuation function. During decision-making, separate signals in VMPFC and DMPFC encoded subjects’ decision variable for choices involving humans or artificial agents, respectively. A distinct amygdala signal predicted rejection of artificial agents. Our data suggest that human reactions toward artificial agents are governed by a neural mechanism that generates a selective, nonlinear valuation in response to a specific feature combination (human-likeness in nonhuman agents). 
Thus, a basic principle known from sensory coding—neural feature selectivity from linear-nonlinear transformation—may also underlie human responses to artificial social partners.
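The “linear-nonlinear transformation” the abstract closes on can be caricatured in a few lines of Python. This is a toy illustration only, not the study’s fitted model: the `likability` function and every numeric parameter here are invented, and the comments map its pieces onto the abstract’s description loosely.

```python
# Toy sketch (invented parameters, not the authors' model): a linear
# human-likeness signal combined with a nonlinear penalty for nearly-human
# artificial agents produces a valuation curve with a characteristic dip.
import math

def likability(human_likeness: float, is_artificial: bool) -> float:
    """Hypothetical valuation of an agent.

    human_likeness runs from 0.0 (clearly mechanical) to 1.0 (fully human).
    Valuation rises linearly with human-likeness; for artificial agents, a
    Gaussian penalty peaking just below full human-likeness carves out the
    "valley."
    """
    linear = human_likeness  # linear human-likeness continuum
    if not is_artificial:
        return linear
    # Nonlinear penalty: strongest for highly (but not perfectly)
    # humanlike artificial agents. Center and width are made up.
    penalty = 1.5 * math.exp(-((human_likeness - 0.85) ** 2) / (2 * 0.07 ** 2))
    return linear - penalty

# A moderately humanlike robot scores higher than a near-human one,
# and a perfectly lifelike one recovers: the dip in between is the valley.
for h in (0.6, 0.85, 1.0):
    print(h, round(likability(h, is_artificial=True), 3))
```

The design choice mirrors the abstract’s framing: one input encodes a smooth human-likeness continuum, another encodes a human-versus-nonhuman distinction, and their combination yields selective low valuation for one feature combination (high human-likeness in a nonhuman agent).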