
Facebook banned blackface. Judging by Instagram filters, you’d never know.

Facebook is failing to effectively moderate its Spark AR filter-creation tool, leaving it open to racist trolls.


Augmented reality (AR) filters have effectively revitalized the Instagram Story. Open the app’s camera and you’re presented with near-infinite possibilities: transform yourself into Tiger King, overlay your photo with falling leaves, or distort yourself into unrecognizable shapes and uncanny memes. If you’re feeling particularly adventurous, you can let a filter decide which character you are from your favorite movie or TV show.

Unless you take the time to dig into their roots, AR filters seem to appear on your Instagram as if by divine creation. You see a friend post a funny photo of themselves as Shrek and tap the link to add the filter to your own photos. The creator of the filter is, most often, lost in the mix.

But there is labor in this process. People — designers, coders, bored college students — are putting dedicated effort into perfecting the art of filter-making. There are more creators than ever before, thanks to the tech’s recent popularity. Facebook says more than 400,000 people have published over 1.2 million AR filters on Facebook and Instagram as of this month.

Here’s the problem: even as this brag-worthy community continues to grow, Facebook invests very limited resources in moderating it. As with any public technology, there are people both ignorant and malicious whose creations are offensive or otherwise problematic. And Facebook is doing almost nothing to stop them.

A racially insensitive, user-created filter called “Harajuku” alters the shape of your face and makes your eyes narrower. Users have been reporting it since September, but it remains available.

Facebook created Spark AR as a tool for everyone. To that end, it’s quite simple to get started — just download Spark AR Studio and watch some YouTube tutorials. You’ll be creating ridiculous AR filters in no time. But building a tool like this for “everyone” is inherently a double-edged sword. Opening it up means people from all walks of life can use your technology. Bad actors can use it, too.

There is a built-in mandatory review process for all filters submitted for public use. This review is meant to weed out creators attempting to use Spark AR to disseminate offensive or harmful information. In order to pass, a filter must meet both Spark AR’s own review policies and the community standards of Facebook and/or Instagram, depending upon where it’s submitted for approval.

But even on the most basic level — upholding Facebook’s overarching community standards — the company fails. Mingus New, an AR filter designer (and co-creator of Club Quarantine), tells Input that filters containing content generally allowed across Facebook and Instagram, like curse words and minimal nudity, are often turned down by the service. Yet, the approval process consistently allows filters with problematic and often downright offensive content to pass through.

“Too many blackface filters to count.”

The examples Mingus has witnessed are numerous and astounding: a filter (created by a white man) where every time you eat a bat from a bowl of soup your eyes get narrower; a filter (created by a white woman) called “Harajuku” that also narrows your eyes; one that makes it appear as if the user has vitiligo. “Too many blackface filters to count,” Mingus says. While the bat filter appears to have been removed since Mingus first reported it, “Harajuku” remains available. Input also confirmed numerous examples of blackface, many of which can be encountered in even a cursory search for terms like “Black Lives Matter.”

Butts, though? Don’t even try it.

It takes very little effort to pull up examples of blackface in Instagram’s filter library.

A Facebook spokesperson told Input that the approval process uses Facebook’s standard Hate Speech policies as its basis. “Blackface is explicitly not allowed,” the spokesperson stated. And yet many such filters are being approved — and left up once reported, too.

Like most Facebook products, there is a method by which to report offensive filters. Unlike the quick reporting used for most Facebook and Instagram posts, though, you can’t report an offensive filter when you see it appear in someone’s story. You need to click through to the creator’s page and do so from there, or find it by searching the library. This extra step makes it more likely filters will go unreported.

The review process once a filter has been reported is inconsistent at best. Mingus says he’s had little success using the official reporting feature in the Instagram app; he’s flagged many racist filters to no avail. The only time he’s been able to actually catch Facebook’s attention is when he takes the reporting to the dedicated “Spark AR Community” Facebook group. Several of the filters Mingus reported and documented in September are still available to the public, Input confirmed.

The Spark AR Community group is run by Facebook’s blue-check verified Spark AR Creators page and identifies itself as “a forum for creators to learn and grow, engage with other creators, find inspiration, build a network and influence the future of Spark AR Studio.” According to the group’s “Members” info, it has 30 admins and moderators. Based on their public-facing Facebook profiles, they range from Facebook execs to seemingly random people from around the world; the Spark AR Creators page itself is listed among them.

It’s difficult to pin down where in the filter-creation ecosystem moderation goes wrong. Spark AR does have an approval system in place, but filters are being approved that are widely considered offensive.

“We might make mistakes or see some policy-violating effects slip through.”

What exactly is Facebook doing to moderate this ever-growing platform? Facebook made it clear back in March that it was suspending human review of filter submissions because its workers had been sent home to quarantine, but in April the company posted an update stating that it would be “evolving” its existing processes and automated system:

As we’re testing this new functionality, we might make mistakes or see some policy-violating effects slip through. We hope our community will report any effects they feel are inappropriate or violate our guidelines, so they can be investigated and removed as necessary. Your feedback will help us to maintain a safe community and continue to improve the process over the coming months.

When prompted for more information on this so-called evolution, a Facebook spokesperson told Input the post refers to issues that arose when the company began adding more automated review. The company refused to comment on just how much of the process is automated at this time, stating instead that it’s a “combination of human and automated systems” with a “small chance” that violating effects can slip through.

The spokesperson pointed to Instagram’s reporting features as a method of catching those filters that “slip through.”

Facebook’s “hope” evidently has not paid off. The community of filter-designers (and Instagram users writ large) is indeed reporting incidents as they see them — but in many cases, nothing comes of it. Even the community’s official Facebook group, which could serve as a useful backup reporting channel, falls flat on its promise to foster an inclusive community around AR filters.

Reporting filters in that group seems no better. Mingus and other users have faced bullying, up to and including queerphobic comments, for doing so; more often, he says, his concerns are met with disdain and silence. Rather than serving as a useful backup to Spark AR’s moderation system, the group ends up reinforcing the very problems rampant in the filter-making community, with no end or recourse in sight.

Facebook’s general moderation problems have been the subject of much public debate of late. The company’s AI moderation often fails to recognize actual violations while taking down harmless photos; it has blocked legitimate COVID-19 information and allowed the incitement of violence through paid ads. Before banning the harmful conspiracy group QAnon from its platforms, Facebook played a major role in the group’s rise to prominence.

Filters can be particularly egregious given their use of augmented reality, a technology capable of transforming users’ appearances in real time. An old photo of someone in blackface is horrible enough; technology capable of putting blackface on an infinite number of users is far, far worse. And Facebook seems entirely unable to keep up with the skyrocketing popularity of filter-making, whether due to a lack of human moderators or underwhelming approval standards.

The way forward, as with much of Facebook’s moderation, is unclear. Devoting more resources to filter-specific moderation would surely help, allowing for a larger human workforce better able to spot issues. Facebook would also do well to be more specific in its guidelines, training moderators with a more nuanced understanding of what “offensive” means in 2020.

As it stands, Spark AR has a moderation problem, and not a small one, either. And right now Facebook doesn’t seem too pressed to change that.