Innovation

A.I. displays an unsettling skill: the ability to show empathy


Empathy enables couples to understand and predict each other's actions. It has long been the domain of primates, but a new breakthrough reveals how scientists are teaching A.I. to act a little like your girlfriend.

Using only visual data and no pre-programmed logic, researchers trained an A.I. to predict the final action of a second robot "actor." At its best, the A.I. predicted the robot actor's final action, after seeing only its starting point, with 98.5 percent accuracy across four movement patterns.

Why it matters — Predicting the movement of a toy robot may seem rudimentary, but the researchers suggest it may actually be evidence of the A.I. having "theory of mind," a cognitive trait present in humans and other primates that underpins social behavior ranging from hide-and-seek to lying and deception.

Discovering this trait in robots could help researchers better understand its evolutionary underpinnings in humans and could help develop more life-like robots in the future.

The paper was published Monday in the journal Scientific Reports.


The big idea — Training an A.I. to predict the movement or pattern of something may be an impressive trick, but it's not necessarily revolutionary. What sets their approach apart, the authors write, is that their A.I. doesn't just predict the next frame of an action (if one video frame shows a soccer ball being kicked, it predicts that the next frame will show the ball in the air) but instead predicts the final frame of a whole sequence of actions (the opposing goalie intercepting the shot at the last moment).

"This is akin to asking a person to predict 'how the movie will end' based on the opening scene," explain the authors in the paper.

The authors explain that successfully predicting the robot actor's movement from visual information alone can represent a primitive understanding of its goals, a kind of social cognition similar to empathy in humans.

Hod Lipson is a co-author on the study and a professor of robotics and innovation at Columbia University. In a statement, he explains that relying on visual data alone is another way to bring this A.I. closer to human cognition.

"We humans also think visually sometimes," explains Lipson. "We frequently imagine the future in our mind's eyes."

To predict the motion of the robot 'actor,' the A.I. 'observer' only needs to see the first moment of the set-up.

Creative Machines Lab/Columbia Engineering

How it works — To see how well their A.I. could actually relate to its robot counterpart, the team developed an experiment.

In a playpen roughly 3 feet by 2 feet, the researchers let loose a tiny robot programmed to chase green dots on the ground (its "food"). Overhead, the A.I. collected bird's-eye-view footage of what the robot was doing, without any prior knowledge of the robot's goals or of what the green dots represented.

In preparation for guessing the robot's next move, the A.I. was trained on videos of this robot completing its path to its food, which included four different types of movement (simulated in toy form in the sketch after this list):

  • Straight-line behavior (moving straight toward the food)
  • Elbow behavior (navigating towards an intermediary point before turning toward the food)
  • Zig-zag behavior (navigating two intermediary points before reaching the food)
  • Obstacle behavior (being unable to see or reach the food because of an obstruction)
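
To make those four patterns concrete, here is a minimal, hypothetical Python sketch (not the authors' code) that generates toy 2-D paths for each behavior; the pen dimensions, waypoint choices, and the way the obstacle case is handled are all illustrative assumptions.

```python
# Illustrative sketch only -- not the Columbia team's code.
# Generates toy 2-D paths for the four behaviors described above,
# inside a pen roughly 3 ft x 2 ft (treated here as 0-3 x 0-2 units).
import numpy as np

PEN = (3.0, 2.0)  # assumed pen dimensions in feet

def straight(start, food, steps=50):
    """Straight-line behavior: head directly toward the food."""
    t = np.linspace(0, 1, steps)[:, None]
    return start + t * (food - start)

def elbow(start, food, steps=50):
    """Elbow behavior: pass through one intermediate waypoint."""
    mid = np.array([start[0], food[1]])        # assumed waypoint choice
    half = steps // 2
    return np.vstack([straight(start, mid, half),
                      straight(mid, food, steps - half)])

def zigzag(start, food, steps=60):
    """Zig-zag behavior: pass through two intermediate waypoints."""
    w1 = start + (food - start) * [0.33, 0.8]  # assumed waypoints
    w2 = start + (food - start) * [0.66, 0.2]
    third = steps // 3
    return np.vstack([straight(start, w1, third),
                      straight(w1, w2, third),
                      straight(w2, food, steps - 2 * third)])

def obstacle(start, food, steps=50):
    """Obstacle behavior: the food is blocked from view, so the robot
    stops partway there (a simplifying assumption)."""
    blocked_point = start + 0.4 * (food - start)
    return straight(start, blocked_point, steps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    start = rng.uniform([0, 0], PEN)
    food = rng.uniform([0, 0], PEN)
    for behavior in (straight, elbow, zigzag, obstacle):
        path = behavior(start, food)
        print(behavior.__name__, "end point:", np.round(path[-1], 2))
```

Running it prints where each toy path ends up, which is the kind of end state the observer A.I. is asked to guess from a single starting image.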

A starting scenario (a green dot and the robot placed in the playpen) was fed to the A.I. as input, and as output the A.I. produced the final frame of the action: how the robot would end up moving toward its goal. To show movement through time, the robot's path was depicted as a smeared line.
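
One way to picture that input-output mapping is an image-to-image network that takes the single starting frame and returns a single predicted final frame with the smeared path drawn in. The sketch below is a simplified, hypothetical PyTorch stand-in, not the study's actual 12-layer network; the layer sizes, image resolution, and loss function are assumptions.

```python
# Hypothetical sketch of an initial-frame -> final-frame predictor.
# The study's own architecture and training setup are not reproduced here.
import torch
import torch.nn as nn

class FinalFramePredictor(nn.Module):
    """Maps one RGB image of the starting scene to one RGB image of the
    predicted final scene (with the robot's path shown as a smear)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                                  # 64x64 input assumed
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),       # -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),      # -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),     # -> 8x8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, first_frame):
        return self.decoder(self.encoder(first_frame))

# Training-step sketch: the target is the *final* frame of each recorded
# trial, not the next frame, so the network learns to jump to the outcome.
model = FinalFramePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

first_frames = torch.rand(8, 3, 64, 64)   # stand-in batch of starting scenes
final_frames = torch.rand(8, 3, 64, 64)   # stand-in batch of matching final scenes

optimizer.zero_grad()
loss = loss_fn(model(first_frames), final_frames)
loss.backward()
optimizer.step()
```

The design point to notice is that the training target is the final frame of each recorded trial rather than the next frame, matching the "how the movie will end" framing the authors describe.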

"The ability of the observer to put itself in its partner's shoes... is perhaps a primitive form of empathy."

What they discovered — When the robot performed one or a mix of these four behaviors (which the A.I. had previously been exposed to in training), the A.I.'s predictions were accurate 98.5 percent of the time.

Using its 12 layers of deep learning networks, the A.I. makes a prediction about the robot's future movement.

Creative Machines Lab/Columbia Engineering

However, this accuracy took a plunge when the robot was secretly given instructions (a new way to move) that the A.I. hadn't previously been exposed to in training. The authors explain that in these cases the A.I. was failing to notice the difference between itself and the robot, i.e. lacking empathy.
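
For a sense of how such an accuracy figure might be tallied, here is a hypothetical evaluation sketch; the article does not spell out the scoring rule, so the red-pixel position check and the distance threshold below are assumptions, not the paper's criterion.

```python
# Hypothetical evaluation sketch; the scoring rule below is an assumption.
import numpy as np

def robot_position(frame, robot_color=np.array([1.0, 0.0, 0.0]), tol=0.2):
    """Assume the robot shows up as roughly red pixels in an (H, W, 3) frame
    and return the centroid of those pixels as its end position."""
    mask = np.linalg.norm(frame - robot_color, axis=-1) < tol
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # robot not visible in this frame
    return np.array([xs.mean(), ys.mean()])

def trial_success(pred_frame, true_frame, max_dist_px=5.0):
    """Count a trial as correct if the predicted end position lands within
    a few pixels of the true end position (threshold is an assumption)."""
    p, t = robot_position(pred_frame), robot_position(true_frame)
    if p is None or t is None:
        return False
    return np.linalg.norm(p - t) <= max_dist_px

def accuracy(pred_frames, true_frames):
    """Fraction of trials scored as successful."""
    return float(np.mean([trial_success(p, t)
                          for p, t in zip(pred_frames, true_frames)]))

# Usage sketch (placeholder variable names):
# acc_known = accuracy(preds_on_trained_behaviors, truths_on_trained_behaviors)
# acc_novel = accuracy(preds_on_novel_behavior, truths_on_novel_behavior)
```

Under this kind of scoring, accuracy would be expected to stay high on the four trained behaviors and drop sharply when the actor switches to a movement pattern the observer never saw, which is the pattern of results the researchers report.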

Those failures aside, the researchers write that the A.I.'s overall success in these trials demonstrates that it can, to some degree, embody the mental state of the robot actor.

"Our findings begin to demonstrate how robots can see the world from another robot's perspective," explains Boyuan Chen, lead author of the study and computer science Ph.D. student at Columbia, said in a statement. "The ability of the observer to put itself in its partner's shoes, so to speak, and understand, without being guided, whether its partner could or could not see the green circle from its vantage point, is perhaps a primitive form of empathy."

What's next — Of course, without being able to access the "mental" states of the A.I. or robot to ask them about this experience, it's difficult to actually say whether the A.I. truly did display empathy or whether it simply appeared to do so.

As the researchers continue exploring the limits of A.I. cognition and how such abilities may represent elementary forms of human-like social intelligence, Lipson said it will be important to keep ethical concerns at the forefront of their minds as well.

"We recognize that robots aren't going to remain passive instruction-following machines for long," Lipson says. "Like other forms of advanced AI, we hope that policymakers can help keep this kind of technology in check, so that we can all benefit."

Abstract: Behavior modeling is an essential cognitive ability that underlies many aspects of human and animal social behavior (Watson in Psychol Rev 20:158, 1913), and an ability we would like to endow robots. Most studies of machine behavior modelling, however, rely on symbolic or selected parametric sensory inputs and built-in knowledge relevant to a given task. Here, we propose that an observer can model the behavior of an actor through visual processing alone, without any prior symbolic information and assumptions about relevant inputs. To test this hypothesis, we designed a non-verbal non-symbolic robotic experiment in which an observer must visualize future plans of an actor robot, based only on an image depicting the initial scene of the actor robot. We found that an AI-observer is able to visualize the future plans of the actor with 98.5% success across four different activities, even when the activity is not known a-priori. We hypothesize that such visual behavior modeling is an essential cognitive ability that will allow machines to understand and coordinate with surrounding agents, while sidestepping the notorious symbol grounding problem. Through a false-belief test, we suggest that this approach may be a precursor to Theory of Mind, one of the distinguishing hallmarks of primate social cognition.