Facial recognition technology is being used to uncover the identity of a person who allegedly assaulted a police officer during a June 1 protest in Washington, D.C. The protestor is said to have punched an officer in the face before “disappearing into the chaos,” according to the charge sheet against him.
According to court documents reported by The Washington Post, the protestor was identified using an image of the man found on Twitter. The image was run through a facial recognition system, which returned a positive match.
The protest, which took place just outside the White House in Lafayette Square, devolved into chaos after Secret Service officers and U.S. Park Police fired tear gas and smoke devices to clear the area. The crackdown has largely been seen as an excessive use of force by law enforcement — all so President Trump could walk from the White House to St. John’s Church for a photo op.
We didn’t even know about this system — Civil liberties organizations, meanwhile, see the use of facial recognition software as a worrisome invasion of privacy that could set dangerous precedents, especially given the false positives these systems inevitably produce. In this case, that problem is compounded by an unsettling detail: we didn’t even know this system existed.
In this case, police used a system called the National Capital Region Facial Recognition Investigative Leads System (NCRFRILS). NCRFRILS reportedly contains a database of more than 1.4 million people — not approaching the scale of systems like Clearview AI’s still-popular software, but sizable nonetheless. The system has reportedly been used more than 12,000 times since last year.
The lack of disclosure around NCRFRILS is disturbing. The system has been around since 2017, but the public has never been clued into its existence. A spokesperson said that’s because the program is still in a “test phase.”
Systematic oppression — Law enforcement favors facial recognition technology as an upgraded system of public surveillance. In the case of NCRFRILS, Police Major Christian Quinn of Fairfax County says the system has provided leads in cases that might have otherwise gone unsolved.
Quinn says the software wouldn’t usually be deployed at peaceful protests — it was used in this circumstance because a protestor had allegedly committed crimes. He says he “would not usher in a tool that imposes on people’s right to privacy, anonymity, and civil rights.”
And yet law enforcement has been seen to do just this, time and time again. Facial recognition software is being weaponized by law enforcement on such a consistent basis that big companies like IBM have walked away from the technology and entire cities like Portland have banned its use outright.
Facial recognition software is also deeply flawed. A federal study last year found damning racial bias in facial recognition systems, with people of color up to 100 times more likely to be misidentified than white men. Indigenous people had the highest false-positive rate of any ethnicity.
We're not saying that punching police is a good thing; it's not. But facial recognition’s growing popularity with law enforcement comes at the expense of personal privacy. Despite every warning sign, these systems continue to be deployed with increasing frequency — and even if you have nothing to hide and no intention of assaulting a police officer, that's worrying.