Police officers in Miami and New York City — as well as several other cities — have been using facial recognition software to find and arrest Black Lives Matter protestors, according to a new report from Ars Technica. The AI-powered software was allegedly used only to track down those who committed illegal acts during the protests.
“If someone is peacefully protesting and not committing a crime, we cannot use it against them,” said Armando Aguilar, Miami’s Assistant Police Chief, in an interview this week. “We have used the technology to identify violent protesters who assaulted police officers, who damaged police property, who set property on fire. We have made several arrests in those cases, and more arrests are coming in the near future.”
The New York Police Department (NYPD) also used facial recognition technology to identify Derrick Ingram, a protestor accused of causing hearing damage to an NYPD officer at a Black Lives Matter protest. Ingram allegedly shouted into an officer’s ear with a megaphone.
In both of these instances — and other, similar cases — law enforcement reportedly used Clearview AI’s controversial facial recognition software to identify protestors from photos posted on social media. We’re watching technological innovation being used against citizens to undermine their rights in real time.
Not just big cities — While the arrests in NYC and Miami are receiving the most attention in the media, it’s quickly becoming clear that police forces all over the U.S. are using Clearview AI or similar software to identify and track protestors.
Police in Columbia, South Carolina, for instance, used another brand of facial recognition software to arrest several protestors this week. Similar tactics were also used to find protestors in Philadelphia. In each of these cases, photos posted to social media served as the basis for identifying protestors.
Mass surveillance — Clearview AI is paving the way for a new nightmare of mass surveillance: one where even our social media posts can be used against us. The company has become infamous as a go-to tool for law enforcement agencies because of its massive database of faces scraped from every corner of the internet.
The response to Clearview AI’s technology has been divided almost entirely along lines of power. Police departments favor the software and other tools like it for how easy they make it to identify people. Civil rights advocates and social media companies detest it for the same reasons, as well as for its propensity for false positives and its lower accuracy in identifying people of color.
Clearview AI’s founder, Hoan Ton-That, has said his software is protected under the First Amendment because all of the information it uses is posted publicly on the internet. But he also admits Clearview AI’s software could be used to create “a dystopian future or something.”
Mass exodus — Even as police ramp up their use of facial recognition software in daily operations, Big Tech companies are moving in the opposite direction.
IBM, long one of the tech industry’s leaders, announced a complete moratorium on its facial recognition technology development shortly after the murder of George Floyd. The company also said it firmly opposes the use of any technology for mass surveillance and called for police reform. Amazon and Microsoft made similar statements, though their pauses are only temporary, at least for now.
The debate over how and when it’s responsible to use this kind of tech proves we still have a long way to go in creating comprehensive policies around facial recognition technology. There’s just no framework for this sort of thing — our policies haven’t caught up to our realities.
In the meantime, we’ll just remind you to wear a mask — both because it reduces the odds of facial recognition tech working to identify you, and because it reduces the spread of COVID-19 — and to take advantage of anonymizing camera apps.