Facebook is offering a sliver of insight into which content it demotes

The new Transparency Hub page details how Facebook treats problematic or low-quality content.


Despite its frequent cries to the contrary, it’s a rare day when Facebook shares genuinely transparent information about what makes its platforms tick. Usually, we only hear about Facebook’s inner workings through leaks, and Mark Zuckerberg isn’t really a fan of that method of involuntary transparency.

This week we’ve been given a treat from the Worst Social Network You Know: a primer on the kinds of content the company deliberately demotes in users’ News Feeds. It’s meant to shed light on instances where Facebook might step in and slow the spread of a link or post.

Facebook published a new page entitled “Types of Content We Demote” on its Transparency Hub on Wednesday. The guidelines outline the posts that receive “reduced distribution” in News Feed. Facebook says it will continue to update the page as new guidelines are developed. More interesting than the list of content that gets demoted, though, is what Facebook purposefully leaves out of this transparency page: any information about how it finds that content, or how heavily each type is demoted. On those fronts, we’ll just have to keep wondering.

A pretty standard list — The “Types of Content We Demote” page is very short, all things considered. There are three lists of content types usually flagged for demotion: one category called “Responding to People’s Direct Feedback”; another called “Incentivizing Creators to Invest in High-Quality and Accurate Content”; a third called “Fostering a Safer Community.” The lists are exactly what you’d expect if you’ve spent more than a few minutes on Facebook (or just reading news about the site). Each content type links out to a longer description of what falls under the category, along with some examples.

The first category lists a dozen types of content related to user feedback — things like ad farms, clickbait links, and low-quality events. The second is mostly about limiting the spread of misinformation: inauthentic sharing, posts from broadly untrusted news publishers, unoriginal news articles, etc. The last category covers content deemed problematic, like posts that border on breaking community standards or unsafe reporting about suicide.
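Facebook doesn’t say how these categories translate into actual demotion, so here’s a minimal sketch of how category-based demotion of a ranking score *could* work in principle. Everything in it — the category names, the multipliers, the function — is invented for illustration; none of it comes from Facebook.

```python
# Hypothetical sketch of category-based feed demotion.
# Facebook does not disclose its real factors or implementation;
# every name and multiplier below is made up for illustration.

# Assumed demotion multipliers per flagged category (all values invented).
DEMOTION_FACTORS = {
    "clickbait": 0.5,
    "ad_farm": 0.4,
    "untrusted_news": 0.3,
    "borderline_content": 0.2,
}

def ranked_score(base_score: float, flags: list) -> float:
    """Apply each flagged category's multiplier to a post's base ranking score."""
    score = base_score
    for flag in flags:
        # Unknown flags leave the score untouched.
        score *= DEMOTION_FACTORS.get(flag, 1.0)
    return score

# A post flagged as clickbait keeps half its score; an unflagged post is untouched.
print(ranked_score(100.0, ["clickbait"]))  # 50.0
print(ranked_score(100.0, []))             # 100.0
# Multiple flags compound, pushing the post further down the feed.
print(ranked_score(100.0, ["clickbait", "untrusted_news"]))
```

In this toy model, demotion is just a multiplier on a ranking score rather than outright removal — which matches the page’s framing of “reduced distribution,” even if the real system is surely far more complicated.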

Still with an air of mystery — This wouldn’t be a Facebook transparency post if it told the whole story, you know? In this case, the missing information is how exactly the demotion process works in practice. Like, do spam posts get demoted by the same factor as misinformation posts? Is the demotion process carried out by humans or is it completely automated?

We didn’t really expect these answers. Facebook keeps as much of its day-to-day operations top secret as possible — as do most social media companies, for that matter. Letting the public in on that kind of information could make it far too easy for malicious users to game the system. Still, we’ll take any crumb of transparency we can get from Facebook, even if the tidbits just end up raising more questions... and red flags.