A.I. is about to put a whole new spin on virtual communication

“When things go bad, technology can absorb the impact.”

Communication between co-workers can be challenging when we can no longer see the other person’s body language. Small slights can fester and develop into larger problems. These issues may be avoidable, however, with the use of smart replies in our texts or emails, according to research out of Cornell University. The reason? People are more likely to blame the machine rather than the other person when things go awry.

“We find that when things go wrong, people take the responsibility that would otherwise have been designated to their human partner and designate some of that to the artificial intelligence system,” said Jess Hohenstein, a doctoral student in the field of information science and the first author of the paper “AI as a Moral Crumple Zone: The Effects of AI-Mediated Communication on Attribution and Trust.”

The researchers based their findings on 113 student participants’ use of the now-defunct messaging app Google Allo, which suggested replies based on its algorithm and the user’s conversation history. The students were tasked with chatting with someone they believed was another study participant but who was actually a researcher controlling the dynamics of the conversation using pre-written scripts.
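
Allo’s actual model was proprietary, but as a rough illustration of the general idea, ranking candidate replies by how well they fit the recent context, a toy sketch might look like the following. The reply list and the word-overlap heuristic are hypothetical stand-ins, not Google’s implementation:

```python
from collections import Counter

# Toy smart-reply ranker: scores a fixed set of canned replies by word
# overlap with the last few messages. Purely illustrative; Allo's real
# system used a learned model, not a heuristic like this.
CANNED_REPLIES = [
    "Sounds good!",
    "I'm not sure about that.",
    "Can you clarify?",
    "Nice work!",
]

def suggest_replies(history: list[str], k: int = 3) -> list[str]:
    """Return the k canned replies sharing the most words with recent context."""
    context = Counter(
        word.lower().strip("!?.,") for msg in history[-3:] for word in msg.split()
    )
    def overlap(reply: str) -> int:
        return sum(context[word.lower().strip("!?.,")] for word in reply.split())
    return sorted(CANNED_REPLIES, key=overlap, reverse=True)[:k]

print(suggest_replies(["The draft looks good", "Nice job on the intro"]))
```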

Conversations fell into one of four conditions, crossing outcome with mediation: a positive conversation with A.I. replies, a negative one with A.I., and positive or negative conversations without A.I. Participants were asked to assign a percentage of responsibility for the task’s outcome to themselves, their conversation partner, and, when it was in use, the A.I. system. They were also asked to rate their level of trust in the other person and in the A.I.
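
For concreteness, the study’s four conditions and the two measures just described could be encoded like this; the field names and the example values are my own illustrative shorthand, not the study’s actual instrument or data:

```python
from dataclasses import dataclass
from itertools import product

# The study's 2 x 2 between-subjects design: conversation outcome crossed
# with whether the messaging app offered A.I. smart replies.
OUTCOMES = ("successful", "unsuccessful")
MEDIATION = ("standard", "ai_mediated")
CONDITIONS = list(product(OUTCOMES, MEDIATION))  # the four cells

@dataclass
class Response:
    """One participant's post-task ratings (field names are my shorthand)."""
    condition: tuple[str, str]
    resp_self: float      # percentage of responsibility assigned to oneself
    resp_partner: float   # ... to the conversation partner
    resp_ai: float        # ... to the A.I. (0 in the no-A.I. conditions)
    trust_partner: float  # trust rating on the study's 6-point scale
    trust_ai: float | None = None  # absent in the standard-app conditions

# Hypothetical response, not real study data; the three responsibility
# shares should sum to 100 percent.
r = Response(("unsuccessful", "ai_mediated"), 40.0, 25.0, 35.0, 3.0, 3.5)
assert abs(r.resp_self + r.resp_partner + r.resp_ai - 100.0) < 1e-9
```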

In successful conversations, participants’ trust in their partners was rated an average of 4.8 out of 6; in successful A.I.-mediated conversations, it was 5.76. In unsuccessful conversations, participants trusted the A.I. slightly more than their partners, scoring it 3.13 against 3.04.

Hohenstein began this research five years ago, but it has gained new relevance as more people have been forced to work from home over the past few months.

“The whole paper was inspired by the moral crumple zone,” co-author Malte Jung, assistant professor of information science at Cornell University and director of the Robots in Groups lab, tells Inverse. “When things go bad, technology can absorb the impact. It allows people to project the negative things somewhere else.”

Jung put forward some reasons why this may happen.

“It could be easier to put the blame elsewhere, especially when you have to tell someone to their face that they’re wrong,” he said. “It may be our general tendency to blame technology when something goes wrong. There’s no personal cost to it. Especially when you look at conflict in groups, technology can act as a lightning rod.”

The researchers are also curious about A.I.’s potential to act as a mediator when conflict arises: if an algorithm detected a statement that could be misconstrued, it could offer suggestions for clarification.
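
The paper doesn’t describe how such a mediator would work; one naive way to prototype the idea is a simple flagger like the sketch below, where the phrase list and the suggested rewordings are entirely placeholder examples:

```python
# Toy conflict-mediation pass: flag phrasing that often reads as curt or
# ambiguous in text and suggest a softer alternative. A real mediator
# would need a trained classifier, not this keyword list.
RISKY_PHRASES = {
    "fine.": "Consider adding context: 'Fine by me, happy to go with that.'",
    "whatever": "Maybe clarify: 'Either option works for me.'",
    "you always": "Try describing the specific instance instead of a pattern.",
}

def review_message(message: str) -> list[str]:
    """Return clarification suggestions for phrases that may be misconstrued."""
    lowered = message.lower()
    return [tip for phrase, tip in RISKY_PHRASES.items() if phrase in lowered]

for tip in review_message("Whatever works, fine."):
    print("Suggestion:", tip)
```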

Jung and his team are interested in how A.I. and other technology are changing the way we communicate. (Another experiment found that a microphone rigged to turn automatically toward whoever was speaking actually facilitated conversation.) Smart replies come with their own concerns; for example, the technology’s data sets are likely based on the communication patterns of white men.

Although this recent study puts a positive spin on the role of technology in our communication, Jung said he wants people to think more about the subtle ways smart replies may be affecting us.

“When you change the smart replies you give people, you can change the way they communicate,” he said. “This tech was created to make our lives easier, but there’s also potential baggage. It changes the fundamental ways we interact with others.”

Abstract:

AI-mediated communication (AI-MC) represents a new paradigm where communication is augmented or generated by an intelligent system. As AI-MC becomes more prevalent, it is important to understand the effects that it has on human interactions and interpersonal relationships. Previous work tells us that in human interactions with intelligent systems, misattribution is common and trust is developed and handled differently than in interactions between humans. This study uses a 2 (successful vs. unsuccessful conversation) × 2 (standard vs. AI-mediated messaging app) between-subjects design to explore whether AI mediation has any effects on attribution and trust. We show that the presence of AI-generated smart replies serves to increase perceived trust between human communicators and that, when things go awry, the AI seems to be perceived as a coercive agent, allowing it to function like a moral crumple zone and lessen the responsibility assigned to the other human communicator. These findings suggest that smart replies could be used to improve relationships and perceptions of conversational outcomes between interlocutors. Our findings also add to existing literature regarding perceived agency in smart agents by illustrating that in this type of AI-MC, the AI is considered to have agency only when communication goes awry.
