
Facebook's Blender chatbot is an AI with personality but a short memory

Even with 9.4 billion parameters, the chatbot still has some severe limitations, and not just in terms of its musical taste.

Move over, SmarterChild: Facebook has released an open-source chatbot called Blender that’s meant to mimic human speech patterns and empathy. The bot is called Blender because it uses a novel AI task called Blended Skill Talk, which — you guessed it — blends different skills together. In this case, that means combining displays of personality, knowledge, and empathy.

Blender is the culmination of years of research over at Facebook. The program’s neural model uses up to 9.4 billion parameters, about 3.6 times as many as the largest previously existing system. Though Facebook acknowledges Blender is by no means perfect, it’s likely the closest the AI community has come to programming a computer to chat like a human.

Maybe Blender will be your new best friend in quarantine. Even with its shortcomings, Blender is a big step forward for the AI community. Before long we could be seeing smarter chatbots used by all kinds of companies.

Becoming more human — Before Blender, the chatbot with the largest neural network was Google’s Meena, which was built with about 2.6 billion parameters. Even at that scale, Meena had notable shortcomings: it lacked personality, for one thing, and its responses weren’t always factually grounded.
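For a quick sense of scale, those two parameter counts are where the “about 3.6 times” figure comes from; a back-of-the-envelope check using only the numbers in this article:

```python
# Parameter counts quoted in the article (approximate)
blender_params = 9.4e9  # Facebook's Blender, largest configuration
meena_params = 2.6e9    # Google's Meena

# Ratio behind the "about 3.6 times" comparison
print(f"Blender is ~{blender_params / meena_params:.1f}x the size of Meena")  # ~3.6x
```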

Blender is built to further these efforts. The chatbot was first trained on 1.5 billion publicly available Reddit conversations and then fine-tuned on data sets emphasizing emotion, knowledge, and personality. As a result of its size, Blender can’t actually fit on a single device. It runs across two computing chips instead.
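The two-chip detail follows from simple arithmetic: at 16-bit precision, 9.4 billion weights alone take roughly 19 GB of memory, more than a typical single 16 GB accelerator holds. The sketch below only illustrates the general idea of splitting a model across two devices; the layer sizes and device names are placeholders, not Blender’s actual partitioning.

```python
import torch.nn as nn

# Rough memory math: 9.4B weights at 2 bytes each (fp16) is ~18.8 GB,
# before counting activations, which is too big for a typical 16 GB accelerator.
print(f"~{9.4e9 * 2 / 1e9:.1f} GB just for the weights at fp16")

# Toy illustration of model parallelism: put half the layers on each chip.
# (Hypothetical layer sizes; the article only says Blender runs across two chips.)
class TwoChipModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.lower_half = nn.Linear(1024, 1024).to("cuda:0")  # first chip
        self.upper_half = nn.Linear(1024, 1024).to("cuda:1")  # second chip

    def forward(self, x):
        x = self.lower_half(x.to("cuda:0"))
        return self.upper_half(x.to("cuda:1"))
```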

An example conversation with Blender. (Image: Facebook Artificial Intelligence)

Facebook found that this additional training paid off. About 75 percent of human testers found Blender more engaging than Meena, while 67 percent said it sounded more human. When reviewing chat logs, human evaluators couldn’t tell the difference between Blender chats and human chats 49 percent of the time. That’s not going to pass the Turing test, but it’s a solid initial performance.

Still plenty of shortcomings — Blender is not perfect by any means, a fact Facebook acknowledges in its announcement of the technology. One major problem is Blender’s tendency to absorb and repeat the bias and toxic language in its training data. This isn’t a new problem — remember Microsoft’s Tay? — but it’s one that still stumps researchers. Humans are inescapably biased and at least sometimes toxic, often to extreme extents on the internet. Especially, as it happens, on Reddit. So perhaps it makes sense that a bot trained there would pick up the same habits.

Blender’s conversations are currently limited to about 14 turns. It can’t remember conversation history much beyond that or factor it into subsequent responses. Blender also has a tendency to completely fabricate information, which seems dangerous given how much misinformation is already out there, but also amusingly human. In the face of things it doesn't know, it lies. And we've all met those people.
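To make the 14-turn limit concrete, here is a minimal sketch of how a fixed-length conversation window behaves. The helper and its name are illustrative, not Blender’s actual implementation; only the 14-turn figure comes from the article.

```python
MAX_TURNS = 14  # roughly the amount of history the article says Blender keeps

def build_context(history: list[str]) -> str:
    """Return only the most recent turns; anything older is effectively forgotten."""
    return "\n".join(history[-MAX_TURNS:])

# Example: once a conversation passes 14 turns, the earliest turns fall out of view.
turns = [f"turn {i}" for i in range(1, 17)]   # 16 turns of chat
print(build_context(turns).splitlines()[0])   # -> "turn 3"
```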

Nonetheless, Blender is a sign of significant progress in AI chatbot technology. Most chatbots right now have very limited use, but Facebook’s research shows there’s still room for those applications to expand substantially in the future.