Bumble will now automatically flag your body-shaming

And, after an automatic warning, additional infractions could cost you your account.

Bumble, the popular dating app that requires women to send the first message when matching with men, is using its algorithms to automatically flag body-shaming. The company announced today via a blog post that it has updated its terms and conditions to explicitly ban unsolicited comments about someone’s appearance, body shape, size, or health.

“Bumble’s mission has always been to build a platform rooted in respect and kindness, and we’re taking another step to make our app safer for our community,” the company writes.

The announcement includes a lengthy (and generally well-nuanced) definition of body-shaming that’s broad enough to cover just about any instance of it. The statement also says that language deemed ableist, racist, homophobic, or transphobic will be automatically flagged as well.

For far too long, dating apps have allowed hate to thrive despite positioning themselves as spaces meant for cultivating connection. Bumble’s move is one other companies would do well to emulate.

One warning — Although body-shaming is now explicitly banned on Bumble, the app won’t suspend your account the first time you poke fun at another user’s appearance. Instead, the app’s AI will flag any instance of body-shaming with a quick pop-up warning, and the message will be sent to a human moderator for review.

If you keep using body-shaming language after that warning, though, Bumble will indeed ban you from its services. The company also says “particularly harmful” comments may result in a ban on the first offense. And if the AI misses a comment you consider body-shaming, you can report it manually so a moderator can review it.
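Pieced together from the announcement, the flow is: flag, warn, escalate to a human, and ban on a repeat (or severe) offense. Here’s a minimal sketch of that escalation logic in Python. Every name here (classify, moderate, review_queue) is invented, and the toy keyword check is a stand-in for Bumble’s actual AI classifier, whose details aren’t public:

```python
from dataclasses import dataclass

review_queue: list[str] = []  # stands in for the human-moderation queue


def classify(message: str) -> str:
    """Toy stand-in for the AI classifier. Returns 'ok', 'body_shaming',
    or 'severe' (a 'particularly harmful' comment)."""
    lowered = message.lower()
    if "slur" in lowered:        # placeholder for particularly harmful content
        return "severe"
    if "ugly" in lowered:        # placeholder for ordinary body-shaming
        return "body_shaming"
    return "ok"


@dataclass
class User:
    warned: bool = False
    banned: bool = False


def moderate(user: User, message: str) -> str:
    label = classify(message)
    if label == "severe":
        user.banned = True                # ban on the first offense
        return "banned"
    if label == "body_shaming":
        review_queue.append(message)      # flagged text also goes to a human
        if user.warned:
            user.banned = True            # second strike after a warning
            return "banned"
        user.warned = True
        return "warning_popup"
    return "delivered"


u = User()
print(moderate(u, "lol you're ugly"))  # warning_popup
print(moderate(u, "still ugly"))       # banned
```

The key design point the article describes is that the AI never acts alone on a first ordinary offense: it warns the sender and routes the message to a person, reserving automatic bans for repeat or severe cases.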

Ahead of the curve — It’s an unfortunate reality that many dating apps have become unfriendly spaces, overrun with hate speech and general disrespect. Meeting strangers online always comes with the risk of bad experiences — and many companies put the onus on users to report those instances. That reactive approach burdens victims and, in practice, lets far more hate speech thrive.

Finding a balance between privacy and moderation is difficult, though, particularly in the dating sphere. Tinder, for example, took such a hands-off approach last year that the service became overrun with horny spambots.

Bumble has found much success using AI to strike this elusive balance. Back in 2019, the company introduced a tool called Private Detector to automatically blur nude photos and let users choose whether or not they’d like to view them. The tool has been 98 percent effective, according to Bumble.
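The pattern is blur-by-default with an explicit opt-in, which keeps the recipient in control. A minimal sketch of that pattern follows, assuming a hypothetical classifier score and cutoff; Bumble’s real model, threshold, and API aren’t public, so all names here are illustrative (and the 0.5 cutoff is invented, not the 98 percent figure, which describes accuracy):

```python
from dataclasses import dataclass


@dataclass
class Photo:
    pixels: bytes
    nsfw_score: float  # would come from an image classifier in practice


BLUR_THRESHOLD = 0.5  # hypothetical cutoff, not a figure from Bumble


def render(photo: Photo, user_opted_in: bool) -> str:
    """Blur a likely-lewd photo by default; show it only if the
    recipient explicitly chooses to view it."""
    if photo.nsfw_score >= BLUR_THRESHOLD and not user_opted_in:
        return "blurred_with_prompt"  # e.g. a 'Tap to view' overlay
    return "visible"


# The same photo renders differently depending on the recipient's choice.
suspect = Photo(pixels=b"...", nsfw_score=0.93)
assert render(suspect, user_opted_in=False) == "blurred_with_prompt"
assert render(suspect, user_opted_in=True) == "visible"
```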