Facebook weighted emoji reactions five times heavier than a 'Like'


The past weight of a single "Angry" emoji versus one "Like." The Washington Post


Are you tired of dunking on Facebook yet? Good, 'cause we sure as hell aren't, either. A new report courtesy of The Washington Post reveals that, starting in 2017 and for years afterward, Facebook weighted "emoji" reactions to user posts five times as heavily as a single "like."

The decision is easy enough to understand — anything that prompts people toward a pointedly emotional response, rather than a neutral-to-positive "like," tends to keep them on Facebook longer, which means more time handing the company valuable user data and potential ad revenue. Of course, a problem surfaced almost immediately, explains the WaPo:

Facebook’s own researchers were quick to suspect a critical flaw. Favoring ‘controversial’ posts — including those that make users angry — could open ‘the door to more spam/abuse/clickbait inadvertently,’ a staffer, whose name was redacted, wrote in one of the internal documents.

Did this happen for billions of users? You better believe it. Did Facebook do anything to change that? Barely.

The human equivalent of the "blank face" emoji. San Francisco Chronicle/Hearst Newspapers via Getty Images

An (eventual) change in approach — Roughly four years after introducing the outsized weighting, Facebook finally adjusted how emoji reactions factor into users' News Feed rankings. "Angry" reactions now count for nothing, while "Love" and "Sad" are each worth two "Likes." Facebook didn't make the change because of the obviously negative effects on society, though... no, it made the switch after receiving an inordinate amount of evidence that its users just "didn't like" seeing angry content.
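For the curious, the before-and-after weighting described in the report can be sketched in a few lines of Python. This is purely illustrative — the function and variable names are assumptions, not Facebook's actual code, and only the relative weights (emoji = 5× a "like" in 2017; later, "Angry" = 0 and "Love"/"Sad" = 2×) come from the article. Reactions the article doesn't mention are omitted.

```python
# Hypothetical sketch of the reaction weighting the WaPo report describes.
# Only the relative weights are sourced; everything else is illustrative.

OLD_WEIGHTS = {"like": 1, "love": 5, "sad": 5, "angry": 5}  # 2017: every emoji = 5x a like
NEW_WEIGHTS = {"like": 1, "love": 2, "sad": 2, "angry": 0}  # revised: angry zeroed, love/sad = 2x


def engagement_score(reactions, weights):
    """Sum reaction counts by weight -- a stand-in for one ranking signal."""
    return sum(weights.get(kind, 0) * count for kind, count in reactions.items())


post = {"like": 100, "angry": 50}
print(engagement_score(post, OLD_WEIGHTS))  # 100*1 + 50*5 = 350
print(engagement_score(post, NEW_WEIGHTS))  # 100*1 + 50*0 = 100
```

Under the old scheme, rage-bait that racked up "Angry" reactions could outscore a post with far more plain "Likes" — exactly the spam/clickbait incentive Facebook's own researchers flagged.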

Manipulating societal emotion — The entire WaPo piece is well worth a read, if nothing else for the reminder that, back in 2012, Facebook engineers were straight-up experimenting on users' emotions. "An experiment in 2012 that was published in 2014 sought to manipulate the emotional valence of posts shown in users' feeds to be more positive or more negative, and then observed whether their own posts changed to match those moods, raising ethical concerns," the article recounts. Extrapolate that across millions upon millions of users, and... yeah. It's creepy as hell.

You should delete your account if you haven’t yet, by the way.