Facebook founder Mark Zuckerberg announced Wednesday the rollout of new artificial intelligence and image-recognition tools to prevent so-called “revenge porn” from being circulated over and over on Facebook’s platforms, including Instagram and Facebook Messenger.

“It’s wrong, it’s hurtful,” Zuckerberg wrote to the 88 million people who follow him on the site.

Notably, the new technology will stop the re-uploading of a flagged image or video, which has been the real problem. While there will always be awful people who want to post revenge porn, it appears Facebook will now be able to stop an image from being re-posted after it has been removed.

Zuckerberg’s status update followed a more detailed blog post on Facebook’s corporate site by Antigone Davis, Head of Global Safety for the company, offering three points about how it works:

  • If you see an intimate image on Facebook that looks like it was shared without permission, you can report it by using the “Report” link that appears when you tap on the downward arrow or “…” next to a post.
  • Specially trained representatives from our Community Operations team review the image and remove it if it violates our Community Standards. In most cases, we will also disable the account for sharing intimate images without permission. We offer an appeals process if someone believes an image was taken down in error.
  • We then use photo-matching technologies to help thwart further attempts to share the image on Facebook, Messenger and Instagram. If someone tries to share the image after it’s been reported and removed, we will alert them that it violates our policies and that we have stopped their attempt to share it.
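Facebook has not said how its photo-matching works, but the standard approach to this kind of re-upload detection is perceptual hashing: reduce an image to a compact fingerprint that survives small edits, then compare fingerprints of new uploads against those of removed images. The sketch below is purely illustrative and assumes nothing about Facebook's actual system; the tiny "images" and the distance threshold are made up for demonstration.

```python
# Illustrative sketch of perceptual ("photo-matching") hashing -- NOT
# Facebook's actual system. The idea: hash a removed image once, then
# hash every new upload and block it if the hashes are close enough.

def average_hash(pixels):
    """Simple average-hash: one bit per pixel, set if that pixel is
    at least as bright as the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(a, b):
    """Number of bits on which two hashes differ."""
    return bin(a ^ b).count("1")

# Hypothetical 4x4 grayscale images: the second is the first with
# slight noise added, mimicking a re-encoded re-upload.
removed_image = [10, 200, 30, 180, 20, 190, 40, 170,
                 15, 195, 35, 175, 25, 185, 45, 165]
new_upload = [12, 198, 28, 182, 22, 188, 41, 169,
              14, 196, 33, 177, 26, 184, 44, 166]

banned_hash = average_hash(removed_image)
candidate_hash = average_hash(new_upload)

# A small Hamming distance means "likely the same image": block it.
# The threshold (here 4 bits) is an illustrative choice, not a real one.
if hamming_distance(banned_hash, candidate_hash) <= 4:
    print("match: upload blocked")
```

Because only fingerprints need to be stored and compared, a platform can check every upload against its list of removed images without retaining the images themselves.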

Revenge porn has been a problem across the internet for the better part of a decade. Slimeballs like Hunter Moore became infamous for hosting images and grainy video in the blog era. As social media platforms like Facebook have become destinations for online conversation in the years since, it has only gotten worse. Revenge porn has also moved to live-streaming: Sexual assaults are happening on Facebook Live, presenting a new problem for the company.

You might say that Facebook, inarguably the biggest social website the world has ever seen, is very late in addressing this issue, though developing reliable artificial intelligence and image-recognition technology at its scale is no small task: the company handles 1.23 billion users, 17 percent of the world's population, on a daily basis.

So why now? The initiative is one of five pillars included in Zuckerberg's manifesto, published in February, which aims to use A.I. to make the site safer for everyone and "prevent harm."

It hasn't come a moment too soon: In October, a 14-year-old in Belfast, Northern Ireland, claimed in a court filing that Facebook was liable for hosting naked photos of her on a "shame page," reported The Guardian. In that case, Facebook removed the photo several times after it was reported, but it kept reappearing. The technology announced Wednesday to stop repeat posting aims to solve exactly that problem.

Revenge porn of Tiziana Cantone, a 31-year-old woman from Italy, contributed to her suicide in September after it was circulated enough to become a cruel meme. After lengthy proceedings, she won Europe's "right to be forgotten" from search engines and Facebook. On March 1, Facebook announced its suicide prevention toolkit to "help connect a person in distress with people who can support them."

Davis notes in her blog post that a study by the Cyber Civil Rights Initiative found that nearly all, 93 percent, of the people surveyed reported "significant emotional distress," and 82 percent reported "significant impairment in social, occupational or other important areas of their life."

Zuckerberg has made clear that he hopes to eliminate Facebook's role in the proliferation of revenge porn.

“If you report it to us, we will now use A.I. and image recognition to prevent it from being shared across all of our platforms.”