The chatter about racial and gender bias in facial recognition technology has been brewing for a while – and now a new federal study confirms just how skewed some of the top algorithms really are. According to a study by the National Institute of Standards and Technology (NIST), false positive rates in facial recognition vary significantly depending on a person's race, age, or gender.
We’ve got to get this right — Concerns about facial recognition have grown among policymakers, politicians, privacy rights groups, and activists. And rightfully so. These biases pose a significant threat to the safety and security of everyday people, especially people of color.
The U.S. agency said it was motivated to conduct the study because "the recent expansion in the availability, capability, and use of face recognition has been accompanied by assertions that demographic dependencies could lead to accuracy variations and potential bias."
It's the first NIST report to describe demographic differences in how algorithms identify faces. In total, 189 algorithms from 99 developers were tested on photos including FBI mugshots, visa application photos, and images from other federal databases.
Microsoft was among the biggest tech names that participated. A noticeable miss: Amazon. Others included Toshiba, Panasonic, Intel, and Chinese tech conglomerate Tencent.
It’s only getting bigger — The use of facial recognition has been growing in law enforcement as a means of surveillance and identification. According to Recode, even schools have been considering facial recognition software to protect students from threats such as mass shootings. Some airports have even added facial scanning in partnership with U.S. Customs and Border Protection.
It's clear facial recognition isn't going anywhere and will only continue to grow. With conclusive research documenting its bias and mounting worries over privacy, the question now becomes: what will it take to get clearer regulations and guidelines?