Facebook is suing a dev for software that evades its restrictions on COVID-19 ads

The software showed Facebook's ad review system different content from what end users saw.


Facebook is suing a developer for allegedly selling software that tricked the company's ad review system into publishing misleading COVID-19 ads. The company has been working to stop the spread of misinformation surrounding the deadly virus, though it has stumbled in some instances.

The idea behind the software, from a company called LeadCloak, is that it shows Facebook's ad review system an innocent-looking website and then shows users something completely different. Facebook specifically says in its lawsuit that the software was used to conceal websites peddling scams related to the coronavirus. With deaths rising in the U.S. and the CDC now advising people to wear masks, it's no surprise some people are trying to exploit the ongoing fear and uncertainty.
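Cloaking tools of this kind typically key on signals in the incoming request, such as the user-agent string or the IP range, to decide which page to serve. A minimal sketch of the general idea (the bot signatures and filenames below are hypothetical illustrations, not LeadCloak's actual method):

```python
# Illustrative only: a toy user-agent check of the kind cloaking tools rely on.
# The signature list and page names are hypothetical, not LeadCloak's code.

REVIEWER_SIGNATURES = ("facebookexternalhit", "facebookcatalog")  # assumed crawler UA substrings

def select_page(user_agent: str) -> str:
    """Return the 'clean' page for suspected reviewers, the real page otherwise."""
    ua = user_agent.lower()
    if any(sig in ua for sig in REVIEWER_SIGNATURES):
        return "clean_landing_page.html"   # what the automated reviewer sees
    return "actual_landing_page.html"      # what an ordinary visitor sees

# A crawler identifying itself as Facebook's scraper gets the clean page;
# a regular browser gets the real one.
print(select_page("facebookexternalhit/1.1"))
print(select_page("Mozilla/5.0 (iPhone; CPU iPhone OS 13_4)"))
```

Real cloaking services are more elaborate, fingerprinting IP blocks and request patterns rather than just the user-agent, but the split-serving logic is the core of why an automated review pass can approve an ad whose landing page later changes for real visitors.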

Setting examples — Facebook has filed quite a few lawsuits against malicious developers in recent years. The company aims to make examples of bad actors in hopes of avoiding another scandal like Cambridge Analytica, in which a developer of innocuous quizzes was found to be collecting data from Facebook users and selling it to a political consulting firm. Facebook ended up paying a $5 billion settlement over that, to say nothing of the public trust it lost. It hopes that by suing LeadCloak it can also track down some of the businesses or individuals who used the software to post misleading ads.

The platform problem — The LeadCloak situation highlights a fundamental problem with Facebook's model as a platform. Most advertisements sold on Facebook are never actually reviewed by a human but rather by automated systems, because that's far cheaper and far more scalable. That means people are incentivized to find new and inventive ways to trick the review system. It doesn't help that Facebook sent its contract moderators home and is instead relying on full-time employees to take up some of the slack, leaving more room for error.

CEO Mark Zuckerberg said in a recent press call, “Our goal is to make it so that as much of the content as we take down as possible, our systems can identify proactively before people need to look at it at all.” He went on to say that by the time a user flags a post, “a bunch of people have already been exposed to it, whereas if our AI systems can get it upfront, that’s obviously the ideal.”

Consumer Reports recently submitted some dangerous COVID-19 ads to Facebook as a test to see if they'd be approved, including one that recommended consuming small daily doses of bleach to stay healthy. Facebook approved them all, though Consumer Reports did not let them run.