
Tinder's new safety feature uses AI to combat harassment

The app will ask "Does this bother you?"


Whenever I'm on a Tinder date and they ask me how my experience on the app has been, all I can think is: "probably better than yours." Even having seen the harassment women face on the app, I can't imagine what it's like to actually experience it. For me, the worst-case scenario is a boring conversation. For many women, it's lewd comments, negging, and more.

Now, Tinder is taking new measures to combat harassment with the help of artificial intelligence. Machine learning will screen messages and flag potentially offensive language. If a message is flagged, the recipient will be asked, "Does this bother you?" If they answer yes, they'll then be able to report that user.

Going further — Another update coming soon won't put the onus solely on the recipient. It'll flag messages before they're sent and ask the sender if they're sure they want to send them. Hopefully, that'll give senders pause and stop harassment before it happens.
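To make the flow concrete, here's a minimal, hypothetical sketch of how a two-sided check like this could work. The keyword list, function names, and prompts are illustrative stand-ins only; Tinder's actual system uses a proprietary machine-learning model, not a word list.

```python
# Toy sketch of the two flows described above: the recipient-side
# "Does this bother you?" prompt, and the sender-side pre-send warning.
# Everything here is a hypothetical placeholder, not Tinder's real code.

OFFENSIVE_TERMS = {"ugly", "worthless"}  # stand-in for a learned classifier


def looks_offensive(message: str) -> bool:
    """Toy stand-in for the ML model that scores a message."""
    words = message.lower().split()
    return any(term in words for term in OFFENSIVE_TERMS)


def deliver(message: str, recipient_confirms_bothered) -> str:
    """Recipient side: flag the message, ask 'Does this bother you?', offer to report."""
    if looks_offensive(message) and recipient_confirms_bothered():
        return "reported"
    return "delivered"


def pre_send_check(message: str, sender_confirms_send) -> bool:
    """Sender side: warn before a flagged message goes out; return True to send."""
    if looks_offensive(message):
        return sender_confirms_send()  # "Are you sure you want to send this?"
    return True


if __name__ == "__main__":
    # Simulated prompts: the recipient says yes, the sender reconsiders.
    print(deliver("you're worthless", lambda: True))          # -> "reported"
    print(pre_send_check("you're worthless", lambda: False))  # -> False (not sent)
```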

The AI features arrive alongside other measures to protect users, including a panic button. The panic button comes as part of an integration with Noonlight, which will let users fill out a "Tinder Timeline" with who they're dating, as well as when and where. If something goes wrong, they'll be able to discreetly notify emergency services from within the app. A new Selfie Verification feature will also allow users to secure blue checks confirming their profiles are authentic.