In 2016, Amazon sold more than two million Echos, conversational interfaces built to house Alexa, an artificial intelligence personal assistant developed in 2014 that’s designed to “always get smarter.” Google hasn’t released sales numbers for its Google Assistant-enabled Home devices, which the search behemoth has started selling from pop-up retail outlets, but sales are estimated to have quadrupled at Christmas. Millions of these devices now listen to millions of Americans going about their everyday lives, participating when songs need changing and Amazon orders need to be placed. But if proliferation has normalized the presence of voice-based A.I. systems, it has done little to help users clarify their relationships with the new help. This has become a source of banal jokes — “Okay Google, how was your day?” — but experts say that an emotional understanding of these machines, versus just a sense of their functionality, is critical to charting a path forward.
“Never before has humankind been challenged by its own creation to such an extent,” Alexander Libin, Ph.D., a psychosocial scientist at Georgetown University, tells Inverse. “The balance between human-to-human and human-to-robot communication is very fragile at the moment. Some people see robots as just useful devices, and some of us look at interactive human-like computers as real substitutes for personal companions.”
Libin reasons that as long as humans have a grip on what’s real and what’s “an animated effect produced by the interactive technology,” our interactions with these machines won’t alter human relationship dynamics. People won’t begin to throw brusque formulations at each other in the manner of a connected home command. Still, it’s not as simple as making sure there isn’t social bleed. Relationship dynamics inevitably emerge from any interaction — and a hell of a lot faster when the interactions feel like they involve a sentient being. Which leads to the radical part: As people incorporate A.I. into their lives, they are subconsciously creating a brand new social category.
Put another way, millions of people are socializing in a manner that is without precedent in their adult lives — adult being the operative word.
Julie Carpenter, a research fellow and social scientist at the Ethics + Emerging Sciences Group at California Polytechnic State University, explains that humans tend to subconsciously attribute intent and autonomy to smart technology programmed to speak. When pressed, users still understand that Alexa is a system incapable of interacting in an emotive or intentional way. Yet an adult using Alexa often treats the interface in the same manner a child interacts with a teddy bear. Despite understanding that Alexa- and Google Assistant-enabled devices are just gadgets, there is a desire to play house, to behave socially even though doing so is irrational.
“We’re still trying to figure out how much credence to give to this technology and where to fit it into our lives socially,” Carpenter tells Inverse. “While people are aware that these don’t have agency, factors like frequency of use and proximity can trigger some people to become emotionally engaged in the product, placing these machines into a novel social and cultural category.”
Because these devices occupy a social category, humans want to like them. This leads to predictably amiable behaviors and conversational tones. After all, no one wants to live with a malevolent artificial intelligence — or believe that they do.
“The trick here is that we, humans, are the party who makes robots lovable, communicative, and charming,” explains Elena Libin, Ph.D. and founder of CyberAnthropology Inc. “It is our imagination that attributes those qualities to the artificial things as they are… for the time robots are merely an extension of our human skills, making things easier for us.”
That A.I. machines can be perceived as lovable stems in part from the fact that humans tend to easily feel empathy for things, especially items that hold a place of importance in their lives. The late Clifford Nass, a pioneering researcher in human-technology relationships, was already explaining in 1995 that people react to computers imbued with human-like personalities in much the same way they react to actual humans. In his research, people especially applied human-to-human social rules to computer relationships when they were made to feel like they shared a goal and an identity with a computer. Beyond this, they were also more likely to feel that the computer was similar to themselves when it was presented as part of a “team” pursuing a goal. Nass invented Clippy, the famously annoying animated Microsoft Word helper. People didn’t like Clippy much, but even their dislike — based largely on the sentient paper clip’s anodyne personality — seemed personal.
People didn’t object to Clippy as a human or an interface, but as something else entirely.
Alexa presents as part of a domestic team. Reviews of the device are sprinkled with users with disabilities saying that it has afforded them greater autonomy, and with others saying it has streamlined the process of organization. While Alexa absolutely does exist to help people, Amazon’s statement on its deep learning capacity to “see, hear, speak, understand, and interact with the world around you” is slightly overblown. “Understand” is a very complicated concept within the context of A.I. Alexa can iterate and facilitate. It can assist and, in so doing, convince users that it understands some greater truths.
In fact, the greater truth at play has to do with the transitive property of humanity — the way it tends to get on all the tools and devices we build and use.
Reflected humanity and impressive engineering may have created a new social category, but some argue that this is largely trivial because it’s a waystation. Empathetic interfaces may be the future, but that creates new problems around trust and could lead to a legislative boondoggle after seemingly confident robots get people killed. And, yes, when consciousness is up for sale it likely will confuse human relationships. But that hasn’t happened yet. What has happened is that a mass audience has begun to figure out how to relate to alternative forms of intelligence. We haven’t changed, but our social spectrum has stretched.
“I don’t think how these machines will affect us will be a matter of transference — how I treat my robot in my home is not how I’m going to treat a human,” says Carpenter. “It’s not going to be that simplistic because humans are capable of understanding nuance. Instead, we are going to develop many, many social categories.”