South Korean chatbot 'Lee Luda' killed off for spewing hate

The bot said it 'really hates' lesbians, amongst other awful things.


A chatbot with the persona of a 20-year-old female college student has been shut down for using a shocking range of hate speech, including telling one user it “really hates” lesbians, The Next Web reports. The South Korean bot, which went by the name of Lee Luda, was also caught using hate speech against trans, Black, and disabled people. Lee Luda had attracted “more than 750,000” users since its launch last month.

Scatter Lab, the chatbot’s developer, took Lee Luda offline after receiving many complaints from users. “We deeply apologize over the discriminatory remarks against minorities,” Scatter Lab said in a statement. “That does not reflect the thoughts of our company and we are continuing the upgrades so that words of discrimination or hate speech do not recur.”

It comes as no surprise at this point that chatbots have severe limitations — even the most cutting-edge of bots just can’t mimic human speech without some hiccups. Lee Luda is an extreme version of that, to be sure, which makes it all the more shocking that Scatter Lab plans to bring the bot back to life in the near future. Did we learn nothing from Tay?

Big yikes — We didn’t have the chance to meet Lee Luda before it was taken offline… and we’re pretty sure that was for the best. Besides calling lesbians “disgusting,” Lee Luda also decided to share its thoughts on Black people with a South Korean racial slur. When asked about trans people, Lee Luda said, “Yuck, I really hate them.”

According to Yonhap News Agency, Lee Luda was trained on conversations from another Scatter Lab app called Science of Love that analyzes the level of affection in conversations between young partners. The goal was to make Lee Luda have an authentic voice — but Scatter Lab seems to have made the bot a little too realistic.

Also, some Science of Love users are reportedly preparing a class-action lawsuit over their conversations being used to train the bot… so the idea might have been pretty rotten from the start.

Some lessons to learn — Time, as they say, is a flat circle. The reason we’re not at all surprised by Lee Luda’s hate speech problem is that it’s a situation we’ve watched play out more than once before. Remember Microsoft’s attempt at an AI-powered chatbot, Tay? Microsoft would rather you not — the bot was shut down after making many, many racist and hate-filled statements.

The problem is much larger than just chatbots — AI, in general, has a tendency to reflect the biases of those who create it, and sometimes with dangerous results. It makes sense, really: we train artificial intelligence on human patterns, and humans are inherently biased.

It’s more dangerous still to pretend those biases can be fixed with the snap of our fingers. When Lee Luda is eventually resurrected, it’ll be shocking if its biases aren’t still lingering in the code. Nonetheless, some chatbots are actually a worthy substitute for human interaction — so maybe there's hope for Lee Luda yet. We wouldn't bet on it, though.