On Day 2 of its 2022 artificial intelligence conference, sandwiched between the reveal of an Alexa-powered home robot called Astro and a presentation from the director of the MIT Space Exploration Initiative, Amazon unveiled something strange.
“Can grandma finish reading me The Wizard of Oz?” a young boy asks as the tech demo begins to play. “Okay,” Alexa responds in her typically cheery, artificial voice. Then, a second later, a different voice emerges from the machine, this one much more human-sounding. But it isn’t human. It’s an AI-generated replication of the kid’s dead grandma.
Speaking onstage, Amazon senior vice president and Head Scientist for Alexa Rohit Prasad explained that the voice was generated using “less than a minute of recording” of the grandmother’s actual voice. The implication is obvious.
“These attributes have become more important in these times of the ongoing pandemic when so many of us have lost someone we love,” Prasad said. “While AI can’t eliminate that pain of loss, it can definitely make their memories last.”
For now, this is just a tech demo. Little else is known about the project except for the fact that Alexa’s engineers pulled it off by treating this as a “voice conversion task and not a speech generation task.” Amazon declined to comment on questions about how the feature works, when it will be rolled out, or who will have access to it.
“We are unquestionably living in the golden era of AI, where our dreams and science fictions are becoming a reality,” Prasad said at the conference. But will that technology actually improve our lives, or will it lead to new nightmares?
“What could go wrong?”
Maura Grossman, a research professor in the School of Computer Science at the University of Waterloo and an expert in the ethics of AI, sees both the good and bad potential.
“You can see the risk of deep fakes, somebody taking your voice, and all of a sudden you're telling me to transfer money from your account or whatever,” she tells Inverse. “So I think you have to ask, ‘What could go right with this and, the more important question, what could go wrong with this technology?’”
Yet there is the possibility for good too. Grossman has a dear friend whose husband passed away and who goes to a medium to connect with him. While Grossman was originally spooked and worried her friend would get defrauded or encounter some psychological trauma, the medium ended up helping her process the loss.
“Was it such a bad thing to visit a medium every other week and converse with her deceased husband?” Grossman asks. “On the one hand, it’s a distortion of reality. On the other hand, it did help her get through this.”
Not everyone can afford to visit a psychic (or believes in the concept in the first place), but an AI capable of perfectly imitating a dead loved one could provide a similar experience for anyone struggling with loss.
Angela Sheldon, a sixty-two-year-old from Northern Virginia, would love to use Alexa to have a chat with her dead mother — whom she already tries to speak to every day anyway. “As it is now, I get no response,” Sheldon tells Inverse. “So even if I just heard ‘Yes I understand’... I would probably be happy.”
But the excitement isn’t universal. For many, this invention feels like something out of Netflix’s dystopian sci-fi anthology Black Mirror. In the Season 2 episode “Be Right Back,” a woman played by Hayley Atwell pays a company to create an AI replica of her dead boyfriend (lifelike body and all). At first, he seems like a perfect copy, but she becomes increasingly frustrated by the tiny differences and ultimately ends up locking him in the attic.
“Yes, it’s disturbing”
Since the invention of computers, digital immortality has intrigued humans, and as AI grows more advanced, the concept suddenly seems within our reach. Grossman points me to the story of a man who recorded his dying father’s life story and used AI to recreate him as a “Dadbot.”
Similarly, journalist-turned-tech-entrepreneur Eugenia Kuyda used a Google neural network to develop a bot that pulled images, texts, and audio from a friend who had been tragically killed in a car accident, recreating a version of him she could grieve with. This personal project pivoted into an app called Replika, which anybody can feed personal information into to recreate somebody as a “virtual AI friend.” As a birthday present, Kanye West got Kim Kardashian a visit from her late father via a speaking hologram.
In 2020, Microsoft was granted a patent to create a similar system using images, voice data, social media posts, electronic messages, and written letters to create a chatbot that can act in the voice of somebody, replicating how they would behave. The patent reads: “The specific person may correspond to a past or present entity (or a version thereof), such as a friend, a relative, an acquaintance, a celebrity, a fictional character, a historical figure, a random entity etc.” It also mentions using 2D or 3D models to make these bots come to life.
However, it seems Microsoft doesn’t actually have any plans to follow through with this project. After the patent made news in 2021, Tim O’Brien, who works on ethical AI advocacy at Microsoft, posted on Twitter that it was filed in 2017, meaning it “predates the AI ethics reviews we do today.” He added, “and yes, it’s disturbing.”
Angeliki Kerasidou, an associate professor at the Ethox Centre at the University of Oxford, points out another simple issue with using AI to replace individual people.
“A person does not stop growing,” she says. “The bot will forever be static, even if it can learn from the interactions. We're not talking about some kind of super intelligence here. So we might be able to reproduce certain actions, but I don't think that we will be able to reproduce the person itself.”