Twitter has no clue why its algorithms amplify right-leaning content

But, yeah, the algorithms are definitely boosting conservative politicians and media outlets.


Twitter’s algorithms are disproportionately boosting right-leaning news outlets and conservative content, a new study by the company finds. And Twitter doesn’t really know why.

The social media giant published a blog post about its findings late last week, co-authored by Rumman Chowdhury, director of software engineering, and Luca Belli, a staff machine learning researcher.

“We believe it’s critical to study the effects of machine learning (ML) on the public conversation and share our findings publicly,” the pair writes. “This effort is part of our ongoing work to look at algorithms across a range of topics.”

The study in question is a deep analysis of how Twitter’s algorithms treat political content. It covered politicians’ tweets from seven countries — Canada, France, Germany, Japan, Spain, the U.K., and the U.S. — along with political content from news outlets in these countries.

Twitter was able to draw some conclusions from the data collected, but more impressive is its willingness to admit so openly that it has lots left to learn.

Definitely some amplification — After analyzing millions of tweets from April 1 to August 15, 2020, Twitter found that political content does indeed end up amplified on an algorithmic timeline compared to a reverse-chronological timeline. This alone isn’t too surprising; algorithms are meant to boost content users are actively engaging with, and political content generally sparks strong feelings.

But not all political content was treated equally by Twitter’s algorithms. In every country studied except Germany, tweets posted by right-leaning politicians received more algorithmic amplification than those posted by left-leaning politicians. The same was true of right-leaning news outlets.

As Twitter notes in its blog post, algorithmic amplification is not “problematic by default.” Users’ interactions with ranking algorithms are complex and hard to disentangle: an algorithmic system’s preference for one type of content over another cannot be attributed solely to the algorithm itself, and it’s this interplay between user behavior and machine learning that makes social media algorithms so difficult to study.

Toward algorithmic transparency — Twitter’s recent commitment to study this algorithmic interplay and publish its results is novel in the social media world. Its biggest competitor, Facebook, studies its own algorithms too, but it only releases that research when placed at metaphorical gunpoint; even then, the findings are heavily annotated to fit the company’s narrative.

Twitter’s studies are far from complete, but this research — along with August’s report about the platform’s photo-cropping algorithms — is a significant start. And, as far as we know, the company hasn’t banned any researchers for silly reasons or fed them incomplete data.

Many of whistleblower Frances Haugen’s claims center on Facebook’s system of algorithms. Twitter’s gradual transparency should help the company avoid a similar reckoning of its own, though it will need to significantly ramp up its research projects to reach anything like full transparency.