In a recent study published in the journal Frontiers in Robotics and AI, researchers designed a new AI system to detect people’s laughs, decide whether to giggle in response, and choose the type of laugh that’s appropriate for the context.
This new design might help liven up chats between people and robots in an increasingly digital world.
“I hope we can foster the idea that laughter should be a fundamental part of any conversational robot,” says study author Divesh Lala, a researcher who studies conversational robots at Kyoto University in Japan. “We have proposed the idea of shared laughter as one way to attack this issue.”
Here’s the background — The past decade has brought freaky, ultra-realistic AI-powered robots that can gab relatively easily with people. And it seems like each new gadget dives even deeper into the uncanny valley.
Take, for example, Sophia: the humanoid device was created by Hong Kong-based Hanson Robotics in 2016. She has since served as an ambassador for the United Nations and spoken at conferences around the world (including an infamous appearance at SXSW in which she claimed she would “destroy humans”).
There’s also the upcoming Tesla Optimus, which the company will preview on September 30. Elon Musk thinks robots will eventually mow lawns, care for the elderly, and serve as friends … as well as sex partners.
“Nowadays, these conversational agents and interactive avatars and so on are becoming more than just mere tools … it’s becoming much more than what we used to have when interacting with computers,” says Özge Nilay Yalçın, a cognitive scientist at Simon Fraser University in Canada.
Despite recent breakthroughs, scientists have struggled to make robots laugh — a crucial step that some experts feel could foster a genuine, empathetic relationship between humans and humanoids.
Previous work has mostly aimed to design robots that can detect people’s laughter, Lala says. But he and his team wanted to take things a step further. “If you can do this, then you can simply make a shared laughter system which just laughs when a person does,” he says.
What’s new — The Kyoto University scientists have created what they call a shared-laughter system that they hope to eventually program into talking robots.
Here’s how it works: When a person laughs, neural networks pick up on the sound. Then, a series of classification models decides whether to chuckle in response and, if so, which type of laugh is appropriate to reply with.
More specifically, the system can pick between a “social” or “mirthful” laugh, categories based on previous studies that have classified our chuckles.
Social laughs — which most people are unfortunately all too familiar with — fill silence rather than expressing genuine delight, while we use the mirthful variety in response to something genuinely funny (like a good DALL-E Mini meme).
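The three-stage decision described above — detect the laugh, decide whether to respond, then pick a social or mirthful reply — can be sketched in code. This is a minimal illustration only: the function names, thresholds, and scoring rules here are invented stand-ins, while the actual system uses trained neural-network classifiers on audio.

```python
# Hypothetical sketch of a shared-laughter pipeline. The thresholds and
# scores below are illustrative assumptions, not the study's real models.
from dataclasses import dataclass

@dataclass
class LaughEvent:
    probability: float   # detector's confidence that the audio was a laugh
    mirth_score: float   # 0.0 = purely social, 1.0 = genuinely amused

def detect_laugh(prob: float, threshold: float = 0.5) -> bool:
    """Stage 1: did the other speaker's audio contain a laugh?"""
    return prob >= threshold

def should_respond(event: LaughEvent) -> bool:
    """Stage 2: decide whether to laugh back at all."""
    # Stand-in rule: only respond to confident detections.
    return event.probability >= 0.7

def choose_laugh_type(event: LaughEvent) -> str:
    """Stage 3: pick the category of laugh to reply with."""
    return "mirthful" if event.mirth_score >= 0.5 else "social"

def shared_laughter(event: LaughEvent) -> str:
    if not detect_laugh(event.probability):
        return "no_laugh_detected"
    if not should_respond(event):
        return "stay_silent"
    return choose_laugh_type(event)

print(shared_laughter(LaughEvent(probability=0.9, mirth_score=0.8)))   # mirthful
print(shared_laughter(LaughEvent(probability=0.75, mirth_score=0.2)))  # social
print(shared_laughter(LaughEvent(probability=0.6, mirth_score=0.9)))   # stay_silent
```

Note that the middle stage is what separates this design from simply echoing every laugh: the system can hear a laugh and still choose silence.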
Why it matters — This recent work is just the latest attempt to make robots appear more empathetic and help them form meaningful relationships with humans. After all, talking robots could one day care for our aging relatives and follow us around our homes, along with other particularly intimate applications.
Some experts claim that computers can only offer superficial empathy, and that it’s a doomed mission from the start. “Now machines are not content to show us they are smart; they pretend to care about our love lives and our children,” wrote Sherry Turkle, a sociologist and psychologist at the Massachusetts Institute of Technology, in her 2021 memoir The Empathy Diaries.
But according to Lala, the AI laughter system isn’t intended to replicate the real thing. “We do not claim that our robot can show true empathy, since this requires them to understand human nature as such,” he says. “During an interaction, the robot is merely trying to simulate what an empathetic human would do.”
In fact, Lala cautions that the AI system doesn’t necessarily detect humor. “It might do this inadvertently but it cannot know if what you are saying is funny, especially if you don't laugh yourself,” he explains. “Although this would be great if it can be accomplished, we do not want to oversell what we are doing.”
He still thinks the simulation could help robots better benefit humans, especially those experiencing social isolation, such as senior home residents. Even if a humanoid truly doesn’t understand its human companion, he says, it helps to have someone (or, rather, something) there to listen.
“One ethical issue is whether or not we want robots to lie and explicitly say things like ‘I understand what you're going through,’ and this is something we should consider,” he says.
What they did — The researchers trained the AI system with data gathered from a speed-dating experiment conducted between Kyoto University students and ERICA, a person-like android that was designed by the lab to study human-robot interaction. In this scenario, ERICA was voiced by amateur actresses sitting in another room.
They examined the audio from these sessions and identified over 3,000 individual laughs, which they sorted into the social and mirthful categories. The team also noted when the actress-operated ERICA copied the human chortles.
To test the finished product, the team crowdsourced over 30 people to listen to an audio recording of the AI system chatting with human subjects, including study author Koji Inoue.
They ran three different conditions: the main shared laughter system, one with no laughter at all, and a less nuanced one that always responds to human laughs solely with a social laugh. After listening, the crowdsourced subjects rated the shared laughter system the highest of all in terms of “empathy, naturalness, human-likeness, and understanding,” according to the study.
What’s next — Now, Lala and his colleagues are working on incorporating the AI system into the ERICA android, along with other conversational robots they’re tinkering with in their lab.
These future studies will be important to show whether this system actually works, says Khiet P. Truong, a computational linguist at the University of Twente in the Netherlands who studies laughter. After all, she points out, the concept could go south pretty quickly.
“If [the robot is] laughing at the wrong moment, that’s going to crush your relationship with the agent,” she says. “It’s so difficult to create a laughing agent because the cost for error is very high.”
Looking into the future, the researchers think the robots could theoretically laugh on their own — not just when prompted by people. But that would require AI to actually pick up on humor. Researchers working in natural language processing, a field of AI that focuses on understanding how people write and speak, are now attempting to do that.
Yalçın wonders how we could ever explain, say, absurdist humor like the kind found in Monty Python to a computer.
“It is not very straightforward. It requires you to create a dataset that has all the cultural and social and personal interactions and all the complexity of the world,” she says. “But I would say that as a first step, this is a good study.”
Until we reach that point (if we ever do), the new AI system may prove to be highly beneficial as is.
Based on current technological progress, though, Lala says it could take up to two decades for humans to enjoy a bona fide conversation with robots. Besides the laughter component, machines also need to improve on skills like eye contact, taking turns talking, and showing interest in the other speaker.
“I think progress is incremental, but we still have a bit to do before we can call human-robot conversation a solved problem,” he says.