After months of mixed results, YouTube is bringing back its human moderators, according to the Financial Times. YouTube’s chief product officer Neal Mohan cited overzealous takedowns by the company’s artificial intelligence-powered moderation algorithms as the main motivation for the shift. When YouTube’s moderators began working from home earlier this year, the platform shifted to AI moderation, which swept up many inoffensive YouTubers in its effort to cleanse the platform of misinformation and hate speech.
There’s nothing quite like a person — At first, it appeared that YouTube’s algorithms simply weren’t up to the challenge of rampant coronavirus misinformation videos. Even when that moderation ramped up, it caught ordinary YouTubers in its net while the genuinely malicious videos simply reappeared, rising like phoenixes from the ashes of bogus cures.
YouTube removed about 11 million videos in the second quarter of this year, roughly double its usual number of takedowns. Mohan told FT that 11 million is still just a drop in the bucket considering how many YouTube videos exist.
“Over 50 percent of those 11 million videos were removed without a single view by an actual YouTube user and over 80 percent were removed with less than 10 views,” Mohan told FT. He also underscored the technology’s ability to flag videos, rather than take them down unilaterally, so that human moderators can “make decisions that tend to be more nuanced, especially in areas like hate speech, or medical misinformation or harassment.”
Claire Wardle, co-founder of First Draft, a non-profit organization focused on social media’s role in misinformation, told FT that “We are a very long way from using artificial intelligence to make sense of problematic speech [such as] a three-hour rambling conspiracy video...[The machines] just can’t do it...Even humans struggle.”
We’re increasingly aware of the emotional and psychological toll this work takes on human moderators, but we still have a long way to go before it can be completely offloaded onto machines. Even human moderators are imperfect, imbuing decisions with their own biases or missing critical dog whistles when they lack the context to catch the nuance.
It pays to get it right — Social media’s responses to misinformation and hate speech are starting to affect advertising revenue, as this summer’s Facebook boycott demonstrated. If YouTube’s AI isn’t good at keeping bad actors off the platform, or at keeping the good ones on it, advertisers can’t place their campaigns without either incurring the public’s wrath or missing out on prime audiences.