
Researchers Discover an Uncanny Valley of the Mind


The “Uncanny Valley” is one of the most successful viral ideas in the history of tech, right up there with Moore’s Law and Elon Musk’s whole personality. A simple idea with smart branding can be incredibly powerful.

In this case, the idea is that as physical and digital representations of humans become more human-like, there comes a point at which users actually find them less trustworthy, and that distrust persists until the representations reach full or near-full realism.
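As a toy illustration of that curve (the shape and the numbers below are invented, not drawn from any study), affinity can be sketched as rising with realism, plunging into a valley just short of full realism, and recovering at the top:

```python
import math

def affinity(realism: float) -> float:
    """Toy uncanny-valley curve: affinity rises with realism (0 to 1),
    plunges into a 'valley' just short of full realism, then recovers.
    The Gaussian dip centered at 0.85 is an invented illustration."""
    dip = math.exp(-((realism - 0.85) ** 2) / (2 * 0.05 ** 2))
    return realism - dip

for r in [0.0, 0.5, 0.8, 0.85, 0.9, 1.0]:
    print(f"realism {r:.2f} -> affinity {affinity(r):+.2f}")
```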

The breadth of that gap is a real worry for humanoid robotics developers, but research published this week in the journal Science argues that it could be far narrower than a second challenge: the Uncanny Valley of the Mind.

In the study, participants were put into a virtual environment the team built in the game engine Unity: a fictional chat program designed to make them “more susceptible to… deceptive instructions.” Those instructions were simple: listen to, and provide feedback on, the opening instructional messages of a supposedly upcoming VR chat program.

Some of the participants were told they were speaking to an avatar controlled by a human, either one having a real, improvised conversation or one delivering a series of pre-written conversational lines. Others were told that the avatar was controlled by an A.I., either an autonomous intelligence picking its own lines or a more traditional, fully scripted chat-bot. The team described people without agency as “unfeeling humans,” and robots with agency as “philosophical zombies.”
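Laid out concretely, that is a two-by-two design: who participants believe controls the avatar (human or A.I.) crossed with how its lines are produced (improvised/autonomous or scripted). Here is a minimal sketch of that condition assignment in Python; the labels, the simple random assignment, and the function names are illustrative assumptions, not the study's actual materials:

```python
import itertools
import random

# The study's two-by-two design: perceived controller x behavior source.
CONTROLLERS = ["human", "ai"]          # who participants believe runs the avatar
SCRIPTS = ["autonomous", "scripted"]   # improvised/self-chosen lines vs. pre-written

CONDITIONS = list(itertools.product(CONTROLLERS, SCRIPTS))
# -> [('human', 'autonomous'), ('human', 'scripted'),
#     ('ai', 'autonomous'), ('ai', 'scripted')]

def assign(participant_ids):
    """Randomly assign each participant to one of the four cells."""
    return {pid: random.choice(CONDITIONS) for pid in participant_ids}

if __name__ == "__main__":
    for pid, (controller, script) in assign(range(8)).items():
        print(f"participant {pid}: told the avatar is a {script} {controller}")
```

Only the cover story differs between cells; the avatar itself behaves the same in each, which is what lets any difference in ratings be pinned on perceived mind rather than on behavior.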

The study found that people tended to react more negatively to an A.I. if they believed it was autonomous, making its own decisions in response to the user’s statements. Partners presented as human-controlled were scored as far less “eerie,” even though their behavior was identical to that of the allegedly A.I.-controlled chat partners.

One fascinating wrinkle came out of the team’s decision to ask similar questions about human operators. They found not only that robots seem creepier when they are believed to be smarter and to have more agency, but also that human beings seem creepier when they are believed to have less agency, as when they follow a static script; the team may have inadvertently stumbled on yet another reason telemarketing calls are so infuriating.

The paper also lays out some of the more likely or widely held ideas that could explain these findings, describing one major explanation as a “perceived threat to human distinctiveness.” It claims that “many cultures regard emotional experience as intrinsically human privilege,” and argues that the observed increase in creepiness could be partly due to participants feeling a kind of implicit threat in the robot’s very existence.

A third-person shot of the VR scenario participants saw in this study.

Basically all research into the Uncanny Valley is motivated by a desire to bridge it: to figure out how to design robots so that people accept them at every stage of their development. If new advancements in robot tech actually lead to worse adoption and sales, then the research simply won’t get done, and the robots will not progress.

According to the authors, that insight will most likely come from research that varies both an A.I.’s physical and mental attributes, whereas this study kept the avatars’ appearance static. Could there be a way of designing a robot’s body and face that makes a higher perceived level of intelligence more palatable? A robot with a monocle, perhaps?

Actually, earlier research provides some guidance on that point. The core finding has been that, so long as we’re not talking about true consciousness, it’s not so much intelligence that’s creepy as a perceived contradiction between physical and mental attributes. Thus, dumb lifelike humans and smart mannequins are both creepy in their own way.
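One way to picture that mismatch account: score how human-like the body looks and how human-like the mind seems on the same scale, and let eeriness track the gap between the two. This is only a toy illustration of the hypothesis, with an invented scale and function, not a model from the paper:

```python
def eeriness(appearance: float, perceived_mind: float) -> float:
    """Toy perceptual-mismatch score. Both inputs run from 0 (machine-like)
    to 1 (human-like); eeriness grows with the gap between body and mind.
    An invented illustration, not a fitted model from the study."""
    return abs(appearance - perceived_mind)

# A lifelike body with a scripted "mind" and a crude body with an
# autonomous mind are both mismatched, and both score as eerie:
print(f"{eeriness(0.9, 0.2):.1f}")  # dumb lifelike human  -> 0.7
print(f"{eeriness(0.2, 0.9):.1f}")  # smart mannequin      -> 0.7
print(f"{eeriness(0.9, 0.9):.1f}")  # matched, human-like  -> 0.0
```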

In general, people feel that robots should act like robots and humans like humans, but as robots begin to take on some human traits and not others, designers will have to find the visual and behavioral signifiers that best match that mix. The challenge, going forward, will be to create robots that don’t just look right, but that feel right, too.
