Culture

AI experts are sounding the alarm about crime-prediction algorithms

They warn that pre-existing bias in the criminal justice system will only be further entrenched by "crime-predicting" technology.


For years the tech industry, alongside some academics, has attempted to make the case for "crime-predicting" algorithms that, according to their proponents, would make the world a safer place. But experts who study artificial intelligence (AI) warn that reliance on, or blind faith in, any sort of predictive algorithm will only entrench the racism that already pervades the criminal justice system.

A public letter from more than 1,000 artificial intelligence experts from Harvard, MIT, Google, and Microsoft, released this week, aims to drive that point home. The experts addressed their letter to the Springer publishing company, urging it to cancel its plan to publish a paper in favor of using predictive algorithms for crime detection. The paper claims that the technology can predict the likelihood of an individual committing a crime with "80 percent accuracy," but experts say the technology will only feed the "tech-to-prison-pipeline," Motherboard reports.

Halt the paper — The study, titled "A Deep Neural Network Model to Predict Criminality Using Image Processing," was slated to appear in Springer Nature — Research Book Series: Transactions on Computational Science and Computational Intelligence (not to be confused with the journal Nature, which Springer also owns). Springer has since rejected the paper for publication.

What do the signatories want? — The AI experts laid out three main points they wanted the publishing company to consider:

  • First, they asked the review committee to publicly rescind the offer to consider the paper and to explain the decision not to publish it, something Springer eventually agreed to do.
  • Second, the experts asked Springer to issue a statement condemning any kind of crime-prediction algorithm and to acknowledge "their role in incentivizing such harmful scholarship in the past." That has yet to happen.
  • Finally, the experts implored other publishers to avoid publishing content that lends baseless credence to a technology that can harm racial minorities.

What the letter says — The full letter is a lengthy, detailed, and resource-replete read, packed with the titles and authors of papers and books on crime-prediction algorithms, ethical issues, legal quagmires, racial inequality, and more. Particularly pertinent is the signatories' effort to draw Springer's attention to the ongoing Black Lives Matter protests and the issue of mass incarceration in the United States. They write:

At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world.

Springer bowing to pressure is the right move. An even stronger one would be speaking out against algorithmic policing, particularly facial recognition. We're optimistic it'll still do so, and that it'll encourage other publishers to think twice before giving airtime to these thoroughly debunked, and dangerous, ideas.