In an effort to improve its content policy, Facebook has announced an update to its moderation approach for "militarized social movements" and conspiracy groups like QAnon. The move to get tougher on militia activity comes shortly after reports revealed that the network ignored multiple warnings about a militia group ahead of the deadly shootings in Kenosha, Wisconsin.
Now with QAnon followers co-opting anti-trafficking efforts and campaigns to stop child exploitation, according to The New York Times, Facebook says it will reject ads that "praise, support or represent militarized social movements." That goes for QAnon, anarchist groups, and others.
What Facebook says — "We will direct people to credible child safety resources when they search for certain child safety hashtags. In addition, content about QAnon and child safety is eligible for fact-checking through our third-party fact-checking program," Facebook says in its statement.
Additionally, the social network says it will "reduce" the promotion of debunked content in the News Feed, and it will step up filtering of such material in Explore and in hashtag results on both Facebook and Instagram.
While limiting promotion and pointing people to reputable, credible sources is a start, it's still a far cry from effectively tackling insidious misinformation. A label slapped on a post does not, by itself, neutralize hoaxes, conspiracies, and baseless claims. And Facebook already knows this.
Still missing the mark — The issue is that anti-trafficking hashtags look innocuous, even helpful, to anyone unaware of how they have been appropriated by political groups with ulterior motives. Users who click these hashtags can still reach the content, false as it is, even with a label plastered on it.
It's a gateway problem more than it is a content problem. In fact, some research shows that Mark Zuckerberg's platform inadvertently enabled the conspiracy movement that claims, among other lunacies, that a satanic cabal rules the world by sacrificing, exploiting, and eating children. An analysis by The New York Times found that Facebook's own recommendation engine pushed that content to users instead of demoting it.
Facebook is trying to tame a monster of its own making, one fed by lax policies and a habit of downplaying dangerous content whenever user engagement and traffic were at stake. If it truly wants to limit the reach and influence of these militias and movements, it will have to go beyond flimsy labels and "fact-check" stickers. It'll have to take a stand and exercise editorial judgment about content. It'll need to accept that it is, in fact, an "arbiter of truth," and that abdicating that responsibility simply allows lies to flourish.