As anti-vaccine paranoia reaches ever more frenzied lows, the internet’s gatekeepers are struggling to keep up. YouTube has decided the most efficient way to do so is to ban all anti-vaccine content outright. You know, the solution that would’ve mitigated widespread misinformation damage if YouTube had implemented it ages ago.
Today YouTube woke up and realized that all the experts begging it to wipe anti-vaccine misinformation from its platform were actually right. A number of notable anti-vaccine YouTubers have now been banned from the site, including Robert F. Kennedy Jr. and Joseph Mercola, an “alternative medicine entrepreneur.”
YouTube had already banned COVID-19 misinformation, sending prominent misinformation-peddlers to hosting sites with less moderation. But lies and conspiracies about other commonly used vaccines, like those for measles or the flu, were still very much allowed on YouTube. The fact that YouTube has only now banned this content — 18 months into a global pandemic, after many years of anti-vaccine misinformation being welcomed with open arms — reveals YouTube’s inability to govern itself.
YouTube pleads patience — YouTube, like most social media sites, is generally resistant to sweeping moderation policy change on the basis of allowing “truth” to thrive on its servers. This time YouTube’s reasoning for not banning all vaccine misinformation earlier is a little different.
“Developing robust policies takes time,” said Matt Halprin, YouTube’s vice president of global trust and safety. “We wanted to launch a policy that is comprehensive, enforceable with consistency and adequately addresses the challenge.”
Meanwhile, Facebook — you know, the one that drags its feet over everything that might threaten growth — banned all vaccine misinformation seven months ago.
An ongoing problem — Halprin did not go into any detail about why YouTube hadn’t prioritized anti-vaccine content policies in the context of the COVID-19 pandemic. The company’s refusal to do so until now is, unfortunately, par for the course. YouTube has become a prominent center for misinformation distribution in the last few years. In the lead-up to the 2020 Presidential Election, for example, voting misinformation thrived on the platform.
YouTube’s response is almost always the same: wait to take action until the problem hits critical mass. Or, in some cases, until the federal government gets involved.
By the time YouTube finally deals with any given moderation issue, it’s already had time to spread like wildfire, thanks to the company’s discoverability algorithms. It’s a dangerous playbook, but one YouTube refuses to revise.