Culture

Clearview AI has signed a contract with ICE to offer 'mission support'

$224,000

The value of the contract between the controversial facial recognition service and Immigration and Customs Enforcement.

Tech Inquiry

Photo: Beto/iStock Unreleased/Getty Images

Despite critical media coverage and a series of legal actions, Clearview AI continues to chug along: the controversial facial recognition startup just signed a $224,000 contract with Immigration and Customs Enforcement (ICE) to offer "mission support." The company told The Verge that ICE will use its technology in its Child Exploitation Unit and in ongoing criminal investigations. Non-profit accountability organization Tech Inquiry first spotted the deal.

Facial recognition software is controversial because of its inaccuracy, its potential for abuse, and its privacy implications. Government agencies including ICE have already begun using the technology to identify suspects in crimes, matching security camera footage against pictures in driver's license databases. That has led to false arrests, as the technology is notorious for misidentifying people of color, which is especially concerning given how dangerous police confrontations can be even when a suspect is innocent. Clearview claims its technology makes accurate matches 75 percent of the time, but that claim has never been independently tested.
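To see why a one-to-many database search invites false matches, here's a minimal sketch of how such identification generally works; the embeddings, threshold, and function names are illustrative assumptions, not a description of Clearview's or any agency's actual system:

```python
import numpy as np

def identify(probe, gallery, threshold=0.6):
    """Return the gallery identity most similar to a probe face embedding.

    A one-to-many search always produces a 'best' match, even for someone
    who isn't in the gallery at all; the threshold is the only safeguard.
    """
    best_name, best_score = None, -1.0
    for name, embedding in gallery.items():
        # Cosine similarity between the two face embeddings.
        score = float(np.dot(probe, embedding) /
                      (np.linalg.norm(probe) * np.linalg.norm(embedding)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```

Set the threshold too low and an innocent lookalike comes back as a hit; set it too high and nothing comes back at all. Vendors tune that trade-off themselves, and they rarely publish their false-match rates.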

Social media surveillance — Clearview AI found its way into the spotlight when it was discovered the company uses pictures scraped from social media to create a searchable database of faces. The ACLU sued the company over the practice, arguing that it violates Illinois state law by collecting such personal information on citizens without their consent. Social media companies have also sent Clearview cease-and-desist orders for violating their policies against scraping content from their services.
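Clearview's actual pipeline is proprietary, but the basic shape of a scrape-and-search face database is well understood. Here's a rough sketch using the open-source face_recognition library; the file names are placeholders, and nothing here reflects Clearview's real code:

```python
import face_recognition
import numpy as np

# Placeholder file names standing in for photos scraped from public profiles.
scraped_photos = ["photo_0001.jpg", "photo_0002.jpg"]

# Build the searchable index: one 128-dimensional encoding per detected face.
index = []  # (source file, face encoding) pairs
for path in scraped_photos:
    image = face_recognition.load_image_file(path)
    for encoding in face_recognition.face_encodings(image):
        index.append((path, encoding))

# Search: encode the query photo, then find the closest stored face.
query = face_recognition.face_encodings(
    face_recognition.load_image_file("query.jpg"))[0]
distances = face_recognition.face_distance(
    np.array([encoding for _, encoding in index]), query)
best = int(np.argmin(distances))
print(index[best][0], distances[best])  # nearest source photo and its distance
```

At Clearview's claimed scale of billions of photos, a linear scan like this would give way to an approximate nearest-neighbor index, but the privacy problem is identical: anyone's public photos become a key for looking up everything else ever scraped about them.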

Major retailers including Walmart and Macy's have implemented some form of facial recognition technology in their stores in order to deter theft. The idea is that store employees can flag suspicious customers and be alerted if those customers return. Rite Aid recently pulled the technology from its stores after an investigation by Reuters, saying its effectiveness wasn't proven and the risk of false positives was too high.

Clearview AI stopped selling its services to private companies in response to the backlash against it. But considering ICE's brutality against children at the border, that's kind of a meaningless olive branch. The technology could theoretically be used by ICE to tag people crossing the border and then find them out in public, pushing immigrants further into the fringes of society and making their already difficult lives even harder.

Ripe for abuse — Critics warn that facial recognition technology like Clearview's could be abused by oppressive governments looking to identify protesters, or by stalkers using it to track down an attractive stranger they see out in public. This type of concern isn't hypothetical: in the past, it's been discovered that phone-tracking services used by police to hunt down suspects ended up in the hands of stalkers. The FCC fined cell phone carriers $200 million for allowing it to happen. Meanwhile, in its early days, Clearview was used by private citizens, including at least one father who used it to secretly vet his daughter's date.

Fortunately, the coronavirus pandemic has thrown a bit of a wrench into the facial recognition industry, as masks make it much more difficult for the algorithms to identify a person. The National Institute of Standards and Technology (NIST) recently released a study finding that masks push the error rates of facial recognition algorithms to anywhere between 5 and 50 percent. Facial recognition technology uses signals distinct to each person's face, such as the distance between their nose and mouth, to make a match. When a mask hides those signals, the algorithms struggle.
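As a toy illustration of that information loss, consider hand-measured landmark distances; all of the numbers below are invented for this example, and real systems use learned embeddings rather than measurements like these:

```python
import numpy as np

# Invented landmark features for two different people:
# [eye spacing, eye-to-nose, nose-to-mouth, mouth width, jaw width]
alice = np.array([6.2, 4.8, 2.1, 4.9, 11.3])
bob   = np.array([6.1, 4.9, 3.0, 4.2, 12.6])

def distance(a, b, visible):
    """Euclidean distance over only the features a mask leaves visible."""
    return float(np.linalg.norm(a[visible] - b[visible]))

full   = np.array([True] * 5)
masked = np.array([True, True, False, False, False])  # mask hides the lower face

print(distance(alice, bob, full))    # ~1.74: clearly two different people
print(distance(alice, bob, masked))  # ~0.14: the visible features barely differ
```

With most of the distinguishing signal hidden behind a mask, two different faces collapse toward the same measurement, so the algorithm either misidentifies the person or finds no match at all.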

But when the pandemic is over, Clearview AI's technology will still be here, as will other services like it. That's why it's essential that regulators create an environment where it can't be abused.