Facebook is Making A.I. Moderators to Flag Photos and Fight Terrorism
Facebook CEO Mark Zuckerberg posted a 6,000-word manifesto on Thursday which, as you can imagine, touched on a number of topics, but perhaps the most interesting section detailed some of the ways that the social network is using A.I. to make Facebook a safer place.
“One of our greatest opportunities to keep people safe is building artificial intelligence to understand more quickly and accurately what is happening across our community,” Zuckerberg wrote.
“There are billions of posts, comments and messages across our services each day, and since it’s impossible to review all of them, we review content once it is reported to us,” he continued. “There have been terribly tragic events — like suicides, some live streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner. There are cases of bullying and harassment every day, that our team must be alerted to before we can help out. These stories show we must find a way to do more.”
Zuckerberg says Facebook’s developers are building advanced A.I. that can navigate some incredibly tricky subtleties of human behavior and culture in order to flag patently objectionable content. Although the technology is “very early” in development, Zuckerberg explains that a system that can view photos and videos, recognize what it’s looking at, and flag inappropriate content already generates about a third of all of Facebook’s reports.
While Facebook continues to work on the A.I. that handles offensive visual media, it’s also developing another system to combat terrorism. Facebook, with its massive global reach, can be used by terrorists to contact and radicalize potential recruits across borders. Zuckerberg says A.I. can thwart their efforts.
“We’re starting to explore ways to use A.I. to tell the difference between news stories about terrorism and actual terrorist propaganda, so we can quickly remove anyone trying to use our services to recruit for a terrorist organization,” he wrote. “This is technically difficult as it requires building A.I. that can read and understand news, but we need to work on this to help fight terrorism worldwide.”
Zuckerberg’s acknowledgment that building an A.I. that understands the news is “technically difficult” is key. Facebook has run into trouble on several occasions for what it has or has not deemed to be objectionable. Human moderators have already caught flak for censoring an iconic photo that demonstrates the horrors of war. Asking Facebook’s A.I. — which has already made a mess of the trending topics section of the site and fallen for “fake news” — to comprehend the complexities of news and culture is a big deal.
Still, it’s a worthy goal to pursue, especially since fallible human moderators can’t do everything (not to mention the toll it takes on the workers). Zuckerberg and Co. should just expect a bit of a rocky road.