How to Create Socially Responsible Algorithms, According to A.I. Institute

Honesty is always a good policy.

by Kevin Litman-Navarro
Flickr / Markus Spiske

The research institute AI Now published new recommendations on Wednesday for the responsible use of algorithms in the public sphere. Its advice is directed at a task force that the New York City Council formed in January to study government use of artificial intelligence.

AI Now’s report, Algorithmic Impact Assessments: Toward Accountable Automation in Public Agencies, outlines the need for transparency when it comes to deploying algorithms. Algorithms shape our daily lives, but their influence often goes unnoticed. Because they are baked into the infrastructure of social media and video platforms, for example, it’s easy to forget that programs often determine what content is pushed to internet users. It’s only when something goes wrong, like a conspiracy theory video reaching the top of YouTube’s trending list, that we scrutinize the automated decision procedures that shape online experiences.

And algorithms aren’t restricted to internet platforms. Government institutions have become increasingly reliant on algorithms, in domains ranging from education to criminal justice. In an ideal world, algorithms would remove human bias from tough decisions, like determining whether an inmate should be granted parole. In practice, however, algorithms are only as impartial as the people who make them.

For example, an investigation by ProPublica demonstrated that risk-assessment algorithms used in courtrooms were racially biased. To make matters worse, many of the algorithms used in the public sector are privately owned, and some companies refuse to share the code underlying their software. That makes it impossible to understand why these so-called “black box” algorithms return certain results.

One potential solution offered by AI Now? Algorithmic Impact Assessments. These evaluations establish a norm of complete transparency, meaning that government agencies that are using algorithms would need to publicize when and how they are using them. “This requirement by itself would go a long way towards shedding light on which technologies are being deployed to serve the public, and where accountability research should be focused,” the report said.

A policy of openness surrounding algorithms would also give citizens a way to scrutinize and protest their use. Would you want an algorithm assigning you a risk-assessment score based on factors outside your control, especially when that score could help determine whether you go to jail? Maybe yes, maybe no. Either way, it’s important to know exactly which variables the algorithm is analyzing.
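To make that concrete, here is a minimal, entirely hypothetical sketch in Python of what disclosure might look like: a toy risk scorer that publishes the exact variables and weights it uses and returns a per-variable breakdown alongside the headline number. The variable names and weights are invented for illustration and do not come from any real courtroom tool.

```python
# Hypothetical illustration only: a toy "transparent" risk scorer.
# The variables and weights below are invented for this example and
# do not reflect any real risk-assessment system.

from dataclasses import dataclass

# Publicly disclosed inputs and weights -- the kind of information an
# Algorithmic Impact Assessment would require an agency to publish.
DISCLOSED_WEIGHTS = {
    "prior_arrests": 0.5,
    "age_at_first_offense": -0.02,
    "failed_court_appearances": 0.8,
}

@dataclass
class ScoredResult:
    score: float
    contributions: dict  # per-variable breakdown, so the result is auditable

def risk_score(inputs: dict) -> ScoredResult:
    """Return a score plus the contribution of each disclosed variable."""
    contributions = {
        name: weight * inputs[name]
        for name, weight in DISCLOSED_WEIGHTS.items()
    }
    return ScoredResult(score=sum(contributions.values()), contributions=contributions)

if __name__ == "__main__":
    result = risk_score(
        {"prior_arrests": 2, "age_at_first_offense": 19, "failed_court_appearances": 1}
    )
    print(result.score)          # the headline number a judge would see
    print(result.contributions)  # the breakdown a defendant could contest
```

A black-box vendor tool would return only the first number; AI Now’s argument is that the public is entitled to something closer to the second.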

Additionally, AI Now recommends establishing a legal standard that lets people challenge unfair algorithms.

For example, if an agency fails to disclose systems that reasonably fall within the scope of those making automated decisions, or if it allows vendors to make overbroad trade secret claims and thus blocks meaningful system access, the public should have the chance to raise concerns with an agency oversight body, or directly in a court of law if the agency refused to rectify these problems after the public comment period.

The recommendations essentially boil down to one overarching mandate: If you’re using algorithms, don’t be shady about it.

Read the full report here.