South Korea just announced that it intends to install 3,000 AI-powered cameras around the country’s capital, Seoul. As if being constantly recorded by security cameras wasn’t dystopian enough, now some of those cameras will be able to think for themselves, too.
This could be very dangerous — The problem with using AI to detect crime is that it's only as accurate as the data and rules its programmers give it. That is often AI's downfall: its inherent biases.
Maybe there’s a slight pattern of crimes being committed by people wearing black hats — does that mean everyone wearing a black hat will be flagged as a potential criminal? Or, more to the point, what if the AI’s programming favors a particular race or gender?
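The black-hat worry above can be made concrete with a toy counting model. This is purely hypothetical — nothing here reflects how the real system works — but it shows how a spurious correlation in training data becomes biased flagging:

```python
# Toy demonstration of how a skewed "training set" turns into biased
# flagging. All data and names here are made up for illustration.
from collections import Counter

# Hypothetical past incidents, each tagged with one visible attribute.
training_incidents = ["black hat", "black hat", "no hat", "black hat"]

attribute_counts = Counter(training_incidents)
total_incidents = len(training_incidents)

def flag_probability(attribute: str) -> float:
    # Naive model: how often this attribute appeared in past incidents.
    return attribute_counts[attribute] / total_incidents

print(flag_probability("black hat"))  # 0.75 — black hats look "suspicious"
print(flag_probability("no hat"))     # 0.25
```

The model has learned nothing about causes — only that black hats happened to show up in the data — yet it would flag every black-hat wearer three times as readily.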
AI-in-progress — Right now the AI that’s set to power the cameras is still studying up on something like 20,000 court documents and pieces of crime footage. The goal is for the AI cameras to be able to deduce the likelihood that a crime will take place at a given place and time. The cameras will reportedly be able to compare past crime patterns to real-time footage. The finished version of the AI software is expected by 2022.
Complex discerning algorithms — According to the Electronics and Telecommunications Research Institute (ETRI) in Seoul, the cameras will be able to process the location, time, and behavior patterns of passersby to measure the probability of crimes taking place. They’ll also be able to detect details as minute as what each person is wearing and carrying with them.
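To picture what "measuring the probability of a crime" from location, time, and behavior might look like, here's a minimal sketch. ETRI hasn't published its model, so every name, signal, and weight below is an assumption invented for illustration:

```python
# Toy sketch of a crime-risk score combining three camera-derived signals.
# Hypothetical only: the real ETRI system's inputs and weights are unknown.
from dataclasses import dataclass

@dataclass
class Observation:
    location_risk: float  # historical crime rate at this spot, 0..1 (assumed)
    time_risk: float      # how risky this hour has been historically, 0..1 (assumed)
    behavior_flags: int   # count of flagged behavior patterns (assumed)

def crime_probability(obs: Observation) -> float:
    """Combine the signals into a single 0..1 risk score (made-up weights)."""
    score = (0.5 * obs.location_risk
             + 0.3 * obs.time_risk
             + 0.1 * min(obs.behavior_flags, 2))
    return min(score, 1.0)

quiet_street = Observation(location_risk=0.1, time_risk=0.2, behavior_flags=0)
late_night_hotspot = Observation(location_risk=0.9, time_risk=0.8, behavior_flags=1)

print(crime_probability(quiet_street))        # low score
print(crime_probability(late_night_hotspot))  # high score
```

Even in this caricature, the article's concern is visible: whoever chooses the signals and weights decides who gets flagged.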
It’s unclear how successful South Korea’s project will be, but maybe we shouldn’t be fighting crime rates with somewhat experimental software. Just a thought.