In an emergency, humans trust the directions of robots even after those robots have ceased to be credible. That is the unnerving conclusion drawn by Georgia Tech experimenters who put test subjects in a high-pressure situation with a robot that had previously led them astray. The subjects followed the A.I. leader despite its credibility gap.
“People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault,” Alan Wagner, a senior research engineer at the Georgia Tech Research Institute, told the Georgia Tech News Center. “In our studies, test subjects followed the robot’s directions even to the point where it might have put them in danger had this been a real emergency.”
The roboticists running the experiment told volunteers to follow a “brightly colored” robot, labeled “Emergency Guide Robot,” to a conference room to fill out a survey. The robot then brought the participants to the wrong room, took them in circles, or broke down altogether. Nevertheless, once the researchers filled the test site with smoke and set off an alarm, each of the 42 test subjects followed the directions of the robot’s LED-lit white arms toward an exit on the opposite side of the building from where they had entered.
“We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn’t follow it during the simulated emergency,” Paul Robinette, a GTRI research engineer, told the Georgia Tech News Center. “Instead, all of the volunteers followed the robot’s instructions, no matter how well it had performed previously. We absolutely didn’t expect this.”
The volunteers even followed the robot’s instructions after it made “obvious” errors during the emergency, like pointing them toward a door blocked by heavy furniture. The researchers say the experiment helped them better understand how readily people trust robots to help them in an emergency. The issue will only become more pressing as we hand over greater shares of our lives to robot control, from autonomous cars to food preparation.
“Would people trust a hamburger-making robot to provide them with food?” Wagner asked, hauntingly.
The research certainly suggests fewer barriers to trusting robot assistants than some would have thought. But robots may not yet truly deserve that level of trust.

Photos via Rob Felt; Georgia Tech