
Facebook says it will nudge teens away from harm... eventually

Let's try adding another algorithm to the mix and see how it goes.

British opposition Liberal Democrat leader Nick Clegg participates in the final of three live telev... GARETH FULLER/AFP/Getty Images

In a rare concession, Facebook says it’s looking to add warnings that will nudge teenagers away from content that may be harmful to their mental health. Actually, maybe “concession” is too strong for what Facebook has in mind. Facebook VP of global affairs Nick Clegg said in a CNN interview Sunday that the company plans to introduce some sort of feature to warn teens away from harmful viewing patterns.

“We’re going to introduce something I think will make a considerable difference, which is where our systems see that the teenager is looking at the same content over and over again and it’s content which may not be conducive to their well-being, we will nudge them to look at other content,” Clegg said in a “State of the Union” interview.

Clegg offered no further details about when that feature might be available or what content, exactly, it might attempt to deter users from viewing. In the context of Facebook’s ongoing, very public reckoning, this feels very much like a band-aid. Not the good, sticky kind, either; just a dollar-store bandage that’ll fall off in the slightest breeze.

Two bandages — Clegg mentioned two new features in the works during his interview. The first will attempt to recognize when users are in a downward spiral of harmful content; the second, called “take a break,” will essentially just prompt users to log off every once in a while. Notably, Clegg offered no timeline for these features and no further details about how they’ll work.

The tease doesn’t feel like the greatest PR strategy, given that Facebook is under pressure right now to be much more transparent. We’d assume that, unlike TikTok’s similar feature, it will just take the form of a pop-up message.

Sidestep, sidestep, sidestep — During the same interview, Clegg was asked point-blank whether Facebook’s algorithms had amplified pro-insurrection voices in the lead-up to the January 6 riots at the U.S. Capitol. Clegg attempted to sidestep that question as well.

“I can’t give you a yes or no answer to the individual, personalized feeds that each person uses,” Clegg said. CNN’s Dana Bash wondered in response whether it might be considered problematic that Clegg wasn’t really sure whether Facebook had allowed that content to fester. We are wondering the same.

More algorithms? — During her testimony last week, Facebook-employee-turned-whistleblower Frances Haugen made it clear that Facebook’s problems run very deep indeed. The problem at hand involves the time users spend on certain content, yes, but that time is harmful because of Facebook’s very nature — especially the way its algorithms push users toward harmful content.

Logging off or nudging users away from this content is not going to fix those problems. These forthcoming features would amount to little more than deflection on Facebook’s part: a PR move, and not a very good one at that. Though that’s sort of par for the course at this point, isn’t it?

Here’s Facebook’s attempt at a quick solution: Fix nothing. Add another algorithm to warn teens when they’ve already gone too far down the rabbit hole. Rinse and repeat.