U.S. prisons are being asked by Congress to evaluate whether artificial intelligence can be used to analyze prisoner phone calls for discussion that may be of concern.
The idea behind using such a system would be to save time for officials who lack the resources to monitor every call. One system under consideration, called Verus, can automatically transcribe calls and scan the transcripts for flagged words or phrases. Its developer says the system has helped prevent at least a few suicides, but privacy advocates worry that AI's poor accuracy record could mean inmates are wrongly flagged for suspicion.
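At its simplest, a pipeline like the one described transcribes a call and then searches the transcript for a watchlist of phrases. The sketch below illustrates only that general idea; the phrase list, scoring, and function names are illustrative assumptions, not Verus's actual implementation.

```python
import re

# Hypothetical watchlist -- real systems would use far larger,
# configurable lists maintained by the facility.
FLAGGED_PHRASES = ["hurt myself", "contraband", "escape plan"]

def flag_transcript(transcript: str) -> list[dict]:
    """Return every watchlist phrase found in a transcript, with its offset."""
    hits = []
    lowered = transcript.lower()
    for phrase in FLAGGED_PHRASES:
        for match in re.finditer(re.escape(phrase), lowered):
            hits.append({"phrase": phrase, "offset": match.start()})
    return hits

# Example: only the second (hypothetical) transcript is flagged for review.
calls = {
    "call-001": "I just wanted to hear your voice today.",
    "call-002": "Sometimes I think about how I might hurt myself.",
}
for call_id, transcript in calls.items():
    hits = flag_transcript(transcript)
    if hits:
        print(f"{call_id}: flagged {[h['phrase'] for h in hits]}")
```

Note that everything downstream of the watchlist depends on the transcript being right in the first place, which is exactly where critics say the technology falls short.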
No privacy — Prisoners have very little privacy while incarcerated as it is, and the proposal also worries people on the outside, who may feel their own privacy is being violated and that they can no longer talk about sensitive topics.
Reuters reported on the proposal, which a House of Representatives panel brought forth as part of an $81 billion spending bill to fund the Department of Justice, including the federal Bureau of Prisons.
Accuracy — Artificial intelligence is notorious for misidentifying people of color, not just in photographs but also in audio. Automatic transcription is already error-prone in general, and a 2020 study by Stanford University and Georgetown University found that major speech-recognition systems made nearly twice as many errors on the voices of Black speakers as on those of white speakers. That risks pulling people deeper into the legal system if the software records them as saying something they didn't.
The high error rates for people of color can be attributed to several factors, like a lack of sufficient sample data used to train artificial intelligence programs or poor labeling of such data.
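Transcription accuracy is conventionally measured by word error rate: the number of word substitutions, insertions, and deletions needed to turn the system's output into the true transcript, divided by the length of the true transcript. The sketch below (a standard edit-distance computation, not tied to any particular vendor) shows how even one mistranscribed word can matter in this context.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance (Levenshtein) divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table: d[i][j] = edits to turn hyp[:j] into ref[:i].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# A single substituted word yields a low error rate on paper,
# but could completely change how a reviewer reads the call.
spoken = "i will call you later tonight"
transcribed = "i will kill you later tonight"
print(word_error_rate(spoken, transcribed))
```

Here one wrong word out of six is a word error rate of only about 0.17, yet it is precisely the kind of mistranscription that could get a call flagged.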
It’s not merely theoretical to say that AI can lead to false arrests and higher incarceration. In several publicized cases, a Black person was falsely arrested after a facial recognition program incorrectly identified them in video footage. Black men are already about six times more likely than white men to be behind bars, and surveillance technology is often deployed in places with largely Black populations.
In its story, Reuters interviewed a woman engaged to a man who is currently incarcerated. Calls are already subject to monitoring, which is stressful enough, but today not every call is actually reviewed — AI would change that by analyzing all of them. “It’s very unsettling — what if I say something wrong on a call?” said Heather Bollin, who worries that anything she says could get her fiancé in trouble. “It could be misconstrued by this technology, and then he could be punished?”
Speech-to-text, critics say, is simply not yet reliable enough to be used in the criminal justice system.