
Prisons are using Amazon Transcribe and AI to monitor inmates’ phone calls

A new report sheds light on companies like LEO Technologies, whose AI audio-monitoring software relies on Amazon’s speech-to-text service.

Inmates make collect phone calls at the Sheriff's Central Men's Jail in Santa Ana, CA, in 2011.
MediaNews Group/Orange County Register via Getty Images

A new investigation courtesy of the Thomson Reuters Foundation found that dozens of jails and prisons across the U.S. are shelling out hundreds of thousands of dollars for phone audio monitoring systems that are often built on software like Amazon’s speech-to-text transcription service, Transcribe. Researchers found that since at least 2019, corrections facilities have used programs like LEO Technologies’ AI-enabled speech monitor, Verus, which is marketed as an inmate safety tool that flags conversations containing keywords like “suicide” and “depression.”
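To give a rough sense of how this kind of keyword flagging can work, here is a minimal, hypothetical sketch that transcribes a call recording with Amazon Transcribe’s standard API (via boto3) and scans the result against a watchlist. This is not LEO Technologies’ actual Verus pipeline; the bucket, job name, and watchlist below are placeholders, and a real system would handle streaming audio, errors, and far more sophisticated matching.

```python
# Hypothetical sketch: flag keywords in a call recording using Amazon Transcribe.
# Bucket, job name, and watchlist are illustrative placeholders, not Verus internals.
import json
import time
import urllib.request

import boto3

transcribe = boto3.client("transcribe")

AUDIO_URI = "s3://example-bucket/call-recording.wav"  # placeholder S3 location
JOB_NAME = "call-keyword-scan-demo"                   # placeholder job name
WATCHLIST = {"suicide", "depression"}                 # keywords cited in the report

# Kick off an asynchronous transcription job.
transcribe.start_transcription_job(
    TranscriptionJobName=JOB_NAME,
    Media={"MediaFileUri": AUDIO_URI},
    MediaFormat="wav",
    LanguageCode="en-US",
)

# Poll until the job finishes (simplified; production code would back off and handle errors).
while True:
    job = transcribe.get_transcription_job(TranscriptionJobName=JOB_NAME)
    status = job["TranscriptionJob"]["TranscriptionJobStatus"]
    if status in ("COMPLETED", "FAILED"):
        break
    time.sleep(10)

if status == "COMPLETED":
    # Download the transcript JSON that Transcribe produces.
    uri = job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"]
    with urllib.request.urlopen(uri) as resp:
        transcript = json.load(resp)["results"]["transcripts"][0]["transcript"]

    # Naive keyword match over the transcribed text.
    hits = [word for word in WATCHLIST if word in transcript.lower()]
    if hits:
        print(f"Flagged call for review; matched keywords: {hits}")
```

Transcribe jobs run asynchronously, which is why the sketch polls for completion before scanning the transcript text.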

A promotional video on LEO’s Verus page includes real audio clips of prisoners discussing suicidal ideation and self-harm, and advertises the AI system as a better means of protecting inmates. The Reuters Foundation, however, unsurprisingly uncovered numerous instances of law enforcement using the AI monitor to keep tabs on suspected crimes and legal discussions, and even to help stave off potential lawsuits against corrections facilities.

Big fans on Capitol Hill — The news comes only a few months after the House of Representatives called for a study on the surveillance tech’s potential uses, in hopes of expanding Department of Justice funding for it. In the meantime, the technology is already being praised by sheriff’s offices, which reportedly often use Verus to flag conversations containing Spanish keywords like “abogado” (lawyer) and “mara,” which can translate to “gang.” Or, you know, just “friends.”

Seeing lawmakers jump at the chance to invest in new panopticon tech is nothing new, unfortunately. Time and time again we’ve been shown that cops sure do love their shiny new toys.

Opening the floodgates — Critics and privacy advocates are sounding the alarm over the increasing popularity of such services, saying the technology is already being used in areas far beyond its legally established purpose of “working to ensure safety and fight crime,” as the Reuters Foundation explains.

Stephanie Krent, a staff attorney at Columbia University’s Knight First Amendment Institute, told the Reuters Foundation that rolling out AI surveillance like Verus often creates an “avalanche effect” of ever more expansive uses. “Once a technology like this is implemented it's hard to let it go,” she said, noting elsewhere that “Protecting the interest and reputations of the people who run the jail is not a legitimate goal.”

Inherent biases — AI recognition software has also long faced criticism over its often inherently biased algorithmic design, along with its suspect accuracy. “AI isn't predicting the future, it's just engaging in profiling,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project (STOP). “This software is automating discrimination against BIPOC people in prison. Until we ban this tech, it'll continue to undermine public safety, sending officers on wild goose chases, treating innocuous speech as a threat.”