ACAB

Canadian cops are using more predictive policing algorithms than ever

New collaborative research shows algorithmic policing software is being used all over Canada. It poses a significant risk to personal freedoms if left unchecked.


It turns out the U.S. isn’t the only place where technology has been weaponized by the police. Law enforcement officers across Canada are utilizing predictive algorithms for crime investigations with increasing frequency, a new study finds.

The investigation was carried out by the University of Toronto’s International Human Rights Program (IHRP) and Citizen Lab, which conducts research on human rights and global security. Their research has been published under the title To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada.

The report’s findings are concerning, to say the least. Researchers found that law enforcement agencies in Canada’s big cities, suburbs, and rural areas alike are, indeed, using algorithmic surveillance technologies or planning to do so in the near future. And, as the researchers point out, the use of these technologies carries inherent risks to citizens’ human rights.

Three types of tech — Not all algorithms are created equal; the researchers have broken law enforcement tech down into three primary categories — all of which, it turns out, are being used by Canadian cops in some form.

Location-focused algorithmic technology is perhaps the most widely known subset in use today. These tools mine correlations in historical police data to predict crime by geographic location. Similarly, person-focused algorithmic technology analyzes police records to identify individuals deemed more “likely” to be involved in criminal activity.
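The report doesn’t reproduce any vendor’s source code, but the location-focused approach can be sketched in a few lines. What follows is a minimal, hypothetical illustration: the coordinates, grid size, and scoring are all invented, and real products are far more elaborate. The core move, though, is the same: count where incidents were recorded before, and call the densest cells the riskiest.

```python
from collections import Counter

# Hypothetical historical incident records as (latitude, longitude)
# pairs. Real systems ingest years of police report data; these
# coordinates are made up for illustration.
incidents = [
    (49.2827, -123.1207), (49.2830, -123.1210),
    (49.2828, -123.1205), (49.2500, -123.1000),
]

def grid_cell(lat, lon, cell_size=0.005):
    """Snap a coordinate onto a coarse grid cell (roughly 500 m)."""
    return (round(lat / cell_size), round(lon / cell_size))

# Count past incidents per cell. The cells with the most recorded
# history are "predicted" to be the highest-risk locations next.
counts = Counter(grid_cell(lat, lon) for lat, lon in incidents)
print(counts.most_common(3))
```

Notice the circularity: the “prediction” is little more than a re-projection of where police already generated records.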

The third category — algorithmic surveillance technology — isn’t necessarily predictive. Instead, this technology automates surveillance using “sophisticated” tools like machine learning and artificial intelligence.

All over the place — Researchers found that law enforcement agencies in all corners of Canada are using these three types of technology.

The Saskatoon Police Service, in collaboration with the Saskatchewan Police Predictive Analytics Lab, develops algorithms to predict when young people might go missing. In Vancouver, police officers use a machine-learning tool called GeoDASH to predict where break-and-enter crimes could happen. Police in Calgary use Palantir’s Gotham software to identify and visualize links between people who interact with police.

These instances are only the tip of the iceberg, it seems. The Calgary Police Service, for example, is also using algorithmic social network analysis to find suspected criminals. The Ontario Provincial Police and Waterloo Regional Police Service are likely intercepting private messages in chat rooms through surveillance technology called the “ICAC Child On-line Protection System.”

The list goes on.
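Palantir hasn’t disclosed Gotham’s internals, and neither it nor Calgary’s tools are reproduced in the report, so the following is only a toy sketch of what social network analysis means in this context, using the open-source networkx library and invented names. The technique ranks people by their structural position in a contact graph, not by anything they’ve done.

```python
import networkx as nx  # standard open-source graph-analysis library

# Hypothetical police-contact records: pairs of people named in the
# same incident report. The names and links are invented.
contacts = [
    ("Avery", "Blake"), ("Avery", "Casey"), ("Blake", "Casey"),
    ("Casey", "Devon"), ("Devon", "Emery"), ("Devon", "Finley"),
]

graph = nx.Graph(contacts)

# Betweenness centrality scores people by how often they sit on the
# shortest paths between others, i.e. who "bridges" groups. Tools of
# this kind flag high scorers as persons of interest, regardless of
# whether they have done anything wrong.
scores = nx.betweenness_centrality(graph)
for person, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")
```

In this toy graph, the highest scorers are simply the people who happen to bridge two clusters of contacts, which is exactly why civil liberties researchers worry about mere association being treated as suspicion.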

Surprise: bad news for human rights — Besides confirming the existence of these technologies, the new research also highlights concerns brought up by their increasing prevalence. Algorithms can be helpful in automating and predicting, sure — but their dangers to the general public far outweigh their virtues.

The researchers point out that, on a fundamental level, the use of algorithmic surveillance and prediction technologies threatens basic rights to freedom of expression and peaceful assembly. The repurposing of historical data to “predict” the future, if left unchecked, quickly becomes damaging to personal freedoms.

Furthermore, none of these algorithms can predict with a high degree of certainty. They’re educated guesses at best. And those guesses, as we’ve seen time and time again, more often than not uphold and deepen existing racism in the criminal justice system. It’s simple: if you analyze racist data, you’re bound to end up with racist “predictions.”
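That feedback loop is easy to demonstrate. The toy simulation below describes no real system; it simply assumes that patrols are allocated in proportion to past records and that crime is only recorded where patrols are present. Under those assumptions, two neighborhoods with identical true crime rates never escape an arbitrary starting disparity.

```python
import random

random.seed(0)

# Toy simulation, not any deployed system: two neighborhoods with the
# SAME underlying rate of observable crime, but neighborhood A starts
# with more records because it was patrolled more heavily in the past.
true_rate = 0.1
recorded = {"A": 30, "B": 10}

for year in range(20):
    total = sum(recorded.values())
    for hood in recorded:
        # "Prediction": allocate 100 patrols in proportion to history.
        patrols = int(100 * recorded[hood] / total)
        # Crime is only recorded where an officer is present to see it.
        recorded[hood] += sum(
            random.random() < true_rate for _ in range(patrols)
        )

print(recorded)  # the initial disparity persists; it never self-corrects
```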

If you think these findings sound incredibly bleak, you’re not alone. The researchers feel the same way, which is why they conclude with recommendations to rectify these dangers. Evolving and expanding policy is the number-one way, researchers say, to mitigate the harm posed by algorithmic surveillance. Increased transparency about which technologies law enforcement agencies are using would also help; we shouldn’t need a full research investigation to find out what technology is in use.

Technological innovation always comes with the potential for misuse, and algorithmic crime prediction and surveillance are no exception. This isn’t going to get better unless major changes are made to our legal and policy frameworks.