Robots Need Our Quantified Trust and They Don't Have It

The future of automation depends on our ability to manipulate ourselves.


Right now, DARPA is developing robots capable of rescuing humans from burning buildings even if that means knocking down walls and wading through flames. These potential savior-bots are evaluated on an obstacle course designed to test their speed, stability, versatility, and more. They’re getting better, stronger, and faster. But what will panicked people think when a machine crashes into their smoke-filled living room? Will these robots alleviate or compound the terror? Understanding the correlation between effectiveness and trust is profoundly difficult when it comes to robot-human interaction.

Trust wasn’t critical to understanding robotics and automation until robots became proficient enough at complicated tasks to transition from being implements to being intelligences. Today, robots work in an astonishing array of industries and in a growing number of service capacities. And whether a robot is helping an engineer prototype a flying car, an elderly person get into bed, or an arson victim escape a building, the interaction is almost as important as the core function. A robot that doesn’t inspire trust is a robot that can’t help anyone, which is why roboticists are working overtime (with robots) to come up with a way to measure trust — a critical step towards being able to study and promote it.

“We humans use trust in all our interactions with the world and that’s true with even regular ‘machines,’” says University of Central Florida researcher Florian Jentsch. “What is unique and different here is that we believe that artificial intelligence and robotics development is at the cusp of… moving away from something that’s really a tool, and toward something that’s a teammate, or a coworker.”

The development of more trustworthy robots (or more trustworthy-seeming robots; there’s often no difference) requires specific expertise. Traditionally, robots have been designed by people who are intimately familiar with what a robot can and cannot do, but not with what it looks like it’s doing or how it seems to be going about any given task. Impression management is a new thing for engineers and an increasingly important part of the field as all-purpose robots become more feasible. “A robot can move,” Jentsch points out. “It can hit you; it can drive into you; it can do bad things.” He adds that this mobility and flexibility is why trust is more important in robotics than in industrial design, a field that has long informed automaton construction.

The design goal becomes communicating the robot’s purpose and, through its behavior, its competence. That requires engineering a form of emotional shorthand, which is no easy task.

All the way back in 1998, the U.S. Air Force commissioned one of the more forward-thinking studies on this subject, hoping to figure out how to integrate automated systems and eventually robots into the military without jeopardizing troop dynamics. The analysis found that people react to robots and to each other in the same way, meaning that human-robot trust would be built on the same principles as human-human trust. Insofar as trust is a multidimensional concept, it is also a fairly consistent phenomenon. It is always won and lost the same way — even when motivation is removed from the picture.


Classically, trust has been evaluated along two numerical scales: motive and competence. Evil and competent doesn’t inspire trust, and neither does benevolent and incompetent. Benevolent and competent does. It’s more or less a quadrant system. But it’s one of many, and the others are far more complicated. One proposed system uses a 40-item questionnaire, with each item rated between 0 and 100 percent. This questionnaire seems to get better, more predictive results by incorporating more subjective evaluations, like how likely a participant is to think a robot is honest or friendly.
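As a rough illustration of how such a multi-item questionnaire might be aggregated into a single trust score, here is a minimal sketch. The item names and the simple averaging are assumptions for demonstration, not the actual method of any published scale:

```python
# Hypothetical sketch: averaging questionnaire items into one trust score.
# Item names and the equal-weight average are illustrative assumptions.

def trust_score(responses):
    """Average a dict of item ratings (each 0-100 percent) into one score."""
    if not responses:
        raise ValueError("no responses to score")
    for item, rating in responses.items():
        if not 0 <= rating <= 100:
            raise ValueError(f"rating for {item!r} out of range: {rating}")
    return sum(responses.values()) / len(responses)

# Example: a participant rates a robot on a few subjective items.
ratings = {"honest": 80, "friendly": 65, "competent": 90, "predictable": 70}
print(trust_score(ratings))  # 76.25
```

A real instrument would likely weight items and validate them against observed behavior, which is exactly the gap the in-situ experiments discussed below are meant to close.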

More complex ratings systems might provide better predictions of real-world human interaction before a robot’s release, but not everyone thinks that user ratings are the right way to go. Over at the University of British Columbia, AJung Moon heads up a lab called the Open Roboethics Initiative, which is dedicated to studying human-robot interaction, and the hard ethical choices that arise from it. “I don’t necessarily think questionnaires are the way to go,” she told Inverse. “I think much more of an in-situ experiment is necessary.”

To make the point that it’s very difficult to predict how people will react to a robot until you’ve seen them actually do so, multiple sources referenced a recent study on robot trust. In this experiment, well-educated people would follow a robot toward an exit after an alarm sounded — even if that robot had gotten lost in the facility minutes earlier. If people will follow a demonstrably untrustworthy robot when they believe their safety is on the line, the argument goes, then almost no interaction can be taken for granted when planning a robot’s release.

“A very careful scientific analysis needs to be undertaken in order for us to begin to comprehend what it is that a relationship between a human and an intelligent machine will be like,” Dartmouth tech commentator and theoretical physicist Marcelo Gleiser told Inverse via email. “Learning by doing may be a very dangerous game to play.”

“We should definitely try to come up with a standardized metric so we can see how one particular platform does before a problem arises during real-world use,” Moon added. “Robotics is such a fast-paced field that we’ve started to adopt these technologies before we can even stop and think about it.”

She mentioned that the so-called “Wizard of Oz” approach to demonstrating a robot’s abilities, in which a human serves as a hidden puppeteer, potentially creates dangerous misunderstandings about what robots can do. This method of refining interaction could, she says, lead to unexpected consequences including, but not limited to, humans trusting the wrong robots at the wrong times.

Still, there needs to be a system and there needs to be a scale so there can be standards and regulations. If a robot’s overall trustworthiness can be put on a linear or even multi-dimensional scale and directly compared to previously released robots, it will be possible to say that a robot needs to be at least this trustworthy to be an in-home elder assistant. Or it could lead to a mandate that more trustworthy robots be capable of more tasks and incur more legal liability for their owners and operators.

Ultimately, though, Jentsch argues that all such speculation is useless until society comes to a more concrete understanding of what powers it wants robots to have. Right now the only real discussion of the importance of human trust in robots is going on with respect to self-driving cars. “When you get beyond that,” he says, “there’s really no standard for what a robot should or should not do.”

And so, in the absence of real public or governmental direction, roboticists are slowly attempting to find their own way forward. A patchwork of civilian and military research projects is building the first-ever understanding of the social dynamics between man and machine, but with no clear idea of what to do with that understanding once they’ve got it. This means that even the most trustworthy scholars are, in a sense, stalling for time.
