Zuckerberg Testimony: Facebook A.I. Will Curb Hate Speech in 5 to 10 Years

That seems like a long time.

Facebook wants to improve its ability to detect hate speech, but Mark Zuckerberg says it could take up to a decade before the company develops A.I. that’s clever enough to do it on its own.

On Tuesday, the young billionaire received the grilling of a lifetime as the Senate Judiciary and Commerce Committees interrogated him about Facebook’s role in everything from the manipulation of elections to the stealing of users’ private data by Cambridge Analytica.

After a question from Senator John Thune (R-SD) about why the public should believe that Facebook is earnestly working to improve privacy, Zuckerberg essentially responded that things are different now. He said the platform is going through a “broad philosophical shift in how we approach our responsibility as a company.”

“We need to now take a more proactive view at policing the ecosystem,” he said.

In part, Zuckerberg was talking about hate speech and the various ways his platform has been used to seed misinformation. This prompted Thune to ask what steps Facebook was taking to improve its ability to define what is and what is not hate speech.

“Hate speech is one of the hardest,” Zuckerberg said. “Determining if something is hate speech is very linguistically nuanced. You need to understand what is a slur and whether something is hateful, and not just in English…”

Zuckerberg said that the company is increasingly developing A.I. tools to flag hate speech proactively, rather than relying on users and employees to report offensive content. But because flagging hate speech is so complex, the CEO estimates it could take five to 10 years to create adequate A.I. “Today we’re just not there on that,” he said.

For now, Zuckerberg said, it’s still on users to flag offensive content. “We have people look at it, we have policies to try and make it as not subjective as possible, but until we get it more automated there is a higher error rate than I’m happy with,” he said.

Zuckerberg also said that by the end of 2018, Facebook would employ around 20,000 people whose sole job would be to work on security and content review.

It’s a little weird that Zuckerberg pegs Facebook’s timeline for competent A.I. at up to a decade; the company has already created one filtering system that’s been deployed on its sister platform, Instagram. DeepText, as Wired reported in 2017, is a machine learning technology that can identify combinations of words commonly used together in spam and harassment and flag them for removal.
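To get a feel for the idea, here’s a minimal Python sketch of a word co-occurrence filter, with entirely invented example comments. The real DeepText is a far more sophisticated deep learning system, and nothing below reflects Facebook’s actual code:

```python
from collections import Counter
from itertools import combinations

# Tiny labeled corpus, entirely made up for illustration.
spam = ["win free followers now", "free followers click here"]
ham = ["great photo", "see you at the game", "free this weekend"]

def word_pairs(text):
    """All unordered pairs of distinct words in a comment."""
    return set(combinations(sorted(set(text.split())), 2))

# Count how often each word pair appears in spam vs. normal comments.
spam_pairs = Counter(p for comment in spam for p in word_pairs(comment))
ham_pairs = Counter(p for comment in ham for p in word_pairs(comment))

def looks_spammy(comment):
    """Flag a comment containing a word pair seen more in spam than ham."""
    return any(spam_pairs[p] > ham_pairs[p] for p in word_pairs(comment))

print(looks_spammy("get free followers today"))   # True: "followers" + "free"
print(looks_spammy("free tickets this weekend"))  # False
```

Even in this toy version, “free” on its own is harmless; it’s the pairing with “followers” that trips the filter, which is the co-occurrence signal DeepText exploits at scale.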

But even when it launched on the platform, Instagram’s CEO Kevin Systrom told Wired that the technology is far from foolproof:

“It’s the classic problem,” he responded. “If you go for accuracy, you misclassify a bunch of stuff that was actually pretty good. So, you know, if you’re my friend and I’m just joking around with you, Instagram should let that through because you’re just joking around and I’m just giving you a hard time.… The thing we don’t want to do is have any instance where we block something that shouldn’t be blocked. The reality is it’s going to happen, so the question is: Is that margin of error worth it for all the really bad stuff that’s blocked?”
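What Systrom is describing is the classic threshold tradeoff in any classifier: loosen the filter and hateful posts slip through; tighten it and harmless jokes get blocked. Here’s a toy Python sketch, with made-up scores and labels, showing how moving a single decision threshold trades one kind of mistake for the other:

```python
# Each post gets a "toxicity" score from a hypothetical model, and a
# single threshold decides what gets blocked. All scores and labels
# here are invented; nothing reflects Instagram's or Facebook's systems.
posts = [
    # (model score, is_actually_hateful)
    (0.95, True),   # an obvious slur
    (0.80, True),   # targeted harassment
    (0.75, False),  # friends trash-talking each other
    (0.60, True),   # coded, ambiguous hate speech
    (0.55, False),  # a sarcastic joke that uses a flagged word
    (0.20, False),  # an ordinary post
]

def evaluate(threshold):
    """Count both kinds of mistakes at a given blocking threshold."""
    wrongly_blocked = sum(score >= threshold and not hateful
                          for score, hateful in posts)
    missed_hate = sum(score < threshold and hateful
                      for score, hateful in posts)
    return wrongly_blocked, missed_hate

for threshold in (0.5, 0.7, 0.9):
    fp, fn = evaluate(threshold)
    print(f"threshold {threshold}: blocked {fp} benign post(s), "
          f"missed {fn} hateful post(s)")
```

No threshold in this toy example gets both numbers to zero, which is exactly the “margin of error” question Systrom poses.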

It’s worth giving Zuckerberg credit if he’s just being realistic about how difficult it is to limit hate speech. But if he wants to keep selling the line that Facebook is indeed a global community, he might not want to wait five to 10 years to protect it.