Report: Face identification startup Clearview AI is an absolute dystopian nightmare

It's the end of privacy as we know it, and we don’t feel fine.

Dimitri Otis/Photodisc/Getty Images

If we needed further evidence that this is not, in fact, the best of all possible worlds and that the notion of privacy in the 21st century is entirely illusory, The New York Times reports a start-up called Clearview AI is using 3 billion images people have posted to social media, YouTube, and even Venmo to help law enforcement identify them.

Clearview AI’s product is essentially a reverse image search engine that compares an uploaded image to its extensive database. Most social networks’ terms of service forbid the sort of scraping needed to create Clearview’s repository… but without explicit legislation and the accompanying threat of financially ruinous penalties, who’s going to stop them? No one, it turns out.
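Clearview’s actual pipeline is proprietary, but reverse image search for faces typically works by reducing each photo to a numerical "embedding" vector and returning the stored vectors closest to the query. A minimal sketch of that idea, where the database entries, vectors, and function names are all illustrative, not Clearview’s code:

```python
import numpy as np

# Toy "database": each scraped photo has been reduced to a fixed-length
# embedding vector by some face-recognition model (not shown here).
database = {
    "profile_photo_123": np.array([0.9, 0.1, 0.3]),
    "venmo_avatar_456":  np.array([0.2, 0.8, 0.5]),
    "youtube_still_789": np.array([0.88, 0.15, 0.28]),
}

def cosine_similarity(a, b):
    # Similarity of two embeddings, ignoring their magnitudes.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def reverse_search(query_embedding, top_k=2):
    """Return the top_k stored photos most similar to the query face."""
    scored = [(name, cosine_similarity(query_embedding, vec))
              for name, vec in database.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]

query = np.array([0.85, 0.12, 0.3])  # embedding of the uploaded photo
print(reverse_search(query))
```

At real scale the linear scan above would be replaced by an approximate nearest-neighbor index, but the principle — compare one vector against billions — is the same.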

Law enforcement is Clearview’s main customer — According to The Times, over 600 law enforcement agencies — both federal and state, and including the F.B.I., Department of Homeland Security, and Florida state police — have used Clearview’s services in the last year, as have various unnamed private companies “for security purposes.”

For added dystopian flair, Clearview’s product includes code that would enable its use with augmented reality glasses, meaning a suitably equipped stranger could identify you at a glance. The service’s inventor, an Australian tech entrepreneur named Hoan Ton-That, says there are no plans to commercialize the AR functionality, however.

Ton-That’s previous, unsuccessful ventures bizarrely (but perhaps fittingly) included an app that let users digitally superimpose Donald Trump’s distinctive hairdo on other people in uploaded photos.

What’s been scraped can’t be un-scraped — Clearview claims its product includes over 3 billion photos, which it says dwarfs the F.B.I.’s database of 411 million images and the L.A.P.D.’s 8 million. And even if you remove your images from public-facing services or change your privacy settings, as it stands Clearview doesn’t remove content from its database that’s no longer available where it originally found it (though it claims to be considering doing so).

Clearview’s marketing materials, obtained through a public-records request in Atlanta. The New York Times

The company says its tool finds a match up to 75 percent of the time. But it hasn’t been independently tested for false positives, which also means we don’t know whether it’s prone to the same inaccuracies in identifying people of color that plague other facial recognition services.
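Independent false-positive testing matters because of scale: even a tiny false-match rate, multiplied across a database of 3 billion photos, can flag far more wrong faces than right ones. A back-of-the-envelope illustration — the error rate here is hypothetical, since Clearview has published no independently verified figures:

```python
# Hypothetical numbers for illustration only; Clearview has released
# no independently verified error rates.
database_size = 3_000_000_000   # photos in the database
false_match_rate = 1e-7         # assumed chance a random non-match clears the threshold

# Expected number of *incorrect* photos flagged on a single search:
expected_false_hits = database_size * false_match_rate
print(round(expected_false_hits))  # roughly 300 wrong faces per query at this rate
```

In other words, a "75 percent" hit rate says nothing about how many innocent people surface alongside each genuine match — which is exactly what independent testing would measure.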

As The Times points out, Clearview has built the perfect tool for oppressive governments looking to identify dissenting protesters — or, closer to home, for would-be stalkers to learn everything about an attractive stranger.

The backers aren’t comforting, either — Ton-That’s co-founder is Richard Schwartz, a former aide to Rudy Giuliani, the President’s personal lawyer now deeply embroiled in the Ukraine debacle that triggered the impeachment proceedings. Oh, and one of Clearview’s early investors is Peter Thiel, who famously bankrolled the lawsuit that bankrupted Gawker.

In the words of Al Gidari, a privacy professor at Stanford Law School interviewed by The Times, without legislation to govern these sorts of applications, “we’re all screwed.”