YouTube doesn't trust AI but thinks it's fine for vetting kids' content

The company has been sending mixed signals about whether it trusts automated content moderation.

[Image: A human moderator correcting posts and removing a flagged one, highlighted in red. Credit: Shutterstock]

YouTube has announced plans to harness artificial intelligence (AI) to root out videos on its platform that are inappropriate for children. Which would be great, if the Google-owned video-streaming behemoth hadn't just this week conceded that machine-based moderation doesn't work.

In a company blog post, YouTube announced:

Going forward, we will build on our approach of using machine learning to detect content for review, by developing and adapting our technology to help us automatically apply age-restrictions. Uploaders can appeal the decision if they believe it was incorrectly applied. For creators in the YouTube Partner Program, we expect these automated age-restrictions to have little to no impact on revenue, as most of these videos also violate our advertiser-friendly guidelines and therefore have limited or no ads.

Mixed signals — When the COVID-19 pandemic began and essentially turned digital and social media businesses upside down, YouTube leaned heavily on automated moderation; it has since decided to bring back human moderators because the AI-based ones were too enthusiastic. The issue came to light after the company's chief product officer, Neal Mohan, acknowledged aggressive video removals by bots.

Human moderators would reintroduce much-needed nuance and complexity, or at least that was the idea, and inoffensive videos would be less likely to be erroneously flagged by bots. Bringing bots back into the equation on something as important as protecting minors, however, is a dangerous gamble. If the AI gets it wrong, YouTube could inadvertently expose children to objectionable content. And if the AI turns out to be overzealous, it's likely to face less blowback than it would if the content it's tasked with vetting were intended for adults.

YouTube says that users who attempt to evade content restrictions via third-party sites will now have to sign in to watch age-restricted videos, which lets the company confirm whether they are indeed 18 or over. "This will help ensure that, no matter where a video is discovered, it will only be viewable by the appropriate audience," according to YouTube. However, as anyone who's visited an alcohol-related website will tell you, vetting age online is hard short of demanding photo ID, and even then it's hardly foolproof.

Open to change — The good news is that YouTube, unlike Facebook, which isn't so open to modifying its policies, seems comparatively amenable to making changes when it deems them necessary.

"We understand that many are turning to YouTube at this time to find content that is both educational and entertaining," the company says. "We will continue to update our products and our policies with features that make sure when they do, they find content that is age-appropriate."