
Rite Aid secretly had facial recognition cameras scanning customers in hundreds of stores

The cameras were placed predominantly inside stores in low-income areas.


Drugstore chain Rite Aid has turned off hundreds of facial recognition cameras it had quietly installed in stores across the U.S. The company did so only a week after Reuters began asking questions about the system.

A Reuters investigation found that Rite Aid deployed the cameras primarily in stores in low-income, non-white neighborhoods, ostensibly to "deter theft and protect employees." The system compared images of customers entering a store against a database of people Rite Aid had previously flagged for suspected criminal activity. When a match was found, an alert went to a security agent, who could check it for accuracy and ask the customer to leave.
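To make that flow concrete, here is a minimal sketch of how a watchlist match might work, using synthetic embedding vectors in place of a real face-recognition model's output. Every name, dimension, and threshold below is an assumption for illustration; Reuters did not publish the details of Rite Aid's actual system.

```python
# Hypothetical watchlist-matching sketch. All values are illustrative
# assumptions, not Rite Aid's actual implementation.
import numpy as np

EMBEDDING_DIM = 128      # typical face-embedding size (assumption)
MATCH_THRESHOLD = 0.6    # cosine-similarity cutoff, chosen arbitrarily here

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(face, watchlist):
    """Return the index of the best watchlist match above threshold, else None."""
    best_idx, best_sim = None, MATCH_THRESHOLD
    for i, entry in enumerate(watchlist):
        sim = cosine_similarity(face, entry)
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx

rng = np.random.default_rng(0)
# Synthetic embeddings stand in for a real model's output on enrolled faces.
watchlist = [rng.normal(size=EMBEDDING_DIM) for _ in range(100)]
# Simulate a noisy re-sighting of enrolled face #42.
incoming = watchlist[42] + rng.normal(scale=0.1, size=EMBEDDING_DIM)

match = check_against_watchlist(incoming, watchlist)
if match is not None:
    print(f"Possible match with watchlist entry {match}; route to human review.")
```

The human-review step in the final branch mirrors what Rite Aid described to Reuters, though in practice accuracy hinges on the chosen threshold and the quality of the enrolled images.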

Rite Aid told Reuters that the facial recognition software reduced violence and organized crime in its stores, but said the decision to end the program was "based on a larger industry conversation" around the technology's efficacy.

Facial recognition software can be bad — The technology has faced mounting criticism for its invasiveness and poor accuracy. People of color are disproportionately misidentified by it, with serious consequences. Last month, Detroit police wrongfully arrested a Black man after facial recognition software identified him as a suspect in a theft case. He later said the security-camera image of the suspect didn't even look like him.

Walmart has also implemented facial recognition software to catch shoplifters, and store employees have reported that it's more of a nuisance than a help because it generates too many false positives.
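That nuisance factor follows from simple arithmetic: when genuine matches are rare, even a fairly accurate system produces mostly false alarms. The figures below are assumptions chosen for illustration, not reported statistics from Walmart or Rite Aid.

```python
# Back-of-the-envelope illustration of the base-rate problem behind false
# positives. Every figure here is an assumption for the sake of the example.
daily_shoppers = 5_000          # visitors scanned per store per day (assumed)
watchlist_prevalence = 0.001    # fraction of visitors actually on a watchlist (assumed)
true_positive_rate = 0.95       # chance a watchlisted face is flagged (assumed)
false_positive_rate = 0.01      # chance an ordinary shopper is flagged (assumed)

true_matches = daily_shoppers * watchlist_prevalence * true_positive_rate
false_alarms = daily_shoppers * (1 - watchlist_prevalence) * false_positive_rate

print(f"True matches per day: {true_matches:.1f}")    # ~4.8
print(f"False alarms per day: {false_alarms:.1f}")    # ~50.0
print(f"Share of alerts that are wrong: {false_alarms / (true_matches + false_alarms):.0%}")
```

Under these assumed numbers, roughly nine out of ten alerts would be wrong, which is consistent with the "more of a nuisance" complaint.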


Less tolerance for bugs — The technology is worrying because it has been shown to perpetuate the disproportionate incarceration of people of color in this country. Masks may make the situation even worse, since accuracy drops sharply when a face is partially obscured. The stakes of a bug in a social media app are low; in a system that can get someone arrested, software with known flaws should be deployed with far more caution.

Privacy advocates have been calling for a broader discussion of such surveillance, and for regulation to protect the public from unlawful search and seizure.