The 'Uncanny Valley' Theory of Robot Faces Holds Water
We want to build an artificial intelligence in our own image, but that might be a bad idea.
Do humanoid robots make you feel uneasy? Well, you’re not alone.
Way back in 1970, Masahiro Mori proposed a theory that on the spectrum from machine to human there is an “uncanny valley,” where humanoids confuse the boundaries between us and them, and we respond with revulsion. (This may apply to sound as well.) The theory caught on because it made intuitive sense. Even Hiroshi Ishiguro’s highly realistic bots have a tinge of not-quite-humanness that can be more than a little unsettling:
And so conventional wisdom has been that if you want your robots to be likable, keep them mostly in the realm of machine. This is the theory that has borne such adorable creatures as Pixar’s WALL-E and Aldebaran’s Pepper.
The problem, for science, is that the theory has been proven anecdotally through pop culture, but not, you know, actually proven. New research out of Stanford attempts to change that, and it may change a great deal more about our understanding of face interfaces.
In a study set to be published in Cognition next year, but available online now, researchers show that the problem with previous Uncanny Valley testing had to do with the methods employed. The problem wasn’t necessarily that the robots appeared too human or not human enough, but that they were human-robot hybrids. They existed on a sliding scale that was little understood.
So, instead of creating new visuals, the researchers used real-world examples of actual robots that have been designed and built. Eighty of them.
They had people rate the faces on their machine-ness, their humanness, and likability. They found that you actually can distribute the faces along a machine-to-human axis. What’s more, they found a dip in likability somewhere in the zone where machine becomes human. In other words, there is an Uncanny Valley; it just doesn’t necessarily represent what we thought it did.
They repeated the experiment in a few different ways, including using a trust game where participants had to make a monetary bet on how much of that money a particular robot would return to them. What they found, in general, supported the idea that we don’t trust hybrids. That said, the experiments are not perfect. They dealt only in images of robots, not three-dimensional beings that can actually interact. The robots with the most human-like faces did very well in terms of likability and trust, but would the illusion be shattered if we saw them attempt to walk and talk?
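The article doesn’t spell out the study’s exact payoff rules, but the classic trust game works roughly like this sketch (the function name, the tripling multiplier, and all parameter values here are illustrative assumptions, not details from the paper):

```python
# Hypothetical sketch of a standard trust-game payoff, as an illustration of
# the experiment described above. The tripling multiplier is a common
# convention in trust games, not a detail reported for this study.

def trust_game_payoff(bet: float, return_fraction: float,
                      multiplier: float = 3.0) -> float:
    """Participant wagers `bet`; the stake is multiplied, and the robot
    partner returns `return_fraction` of the resulting pot. Returns the
    amount the participant gets back."""
    pot = bet * multiplier
    return pot * return_fraction

# Betting big on a robot you trust pays off if it reciprocates:
generous = trust_game_payoff(bet=10.0, return_fraction=0.5)  # gets back 15.0
# Betting on a hybrid-looking robot you expect to stiff you does not:
stingy = trust_game_payoff(bet=10.0, return_fraction=0.1)    # gets back 3.0
```

The bet size itself is the measure of trust: the more a participant expects a given robot face to reciprocate, the more money they should be willing to stake on it.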
The researchers were self-critical enough to make exactly that point. These almost-human bots “may occupy a precarious position at which small faults in their humanness might send the social interaction tumbling.”