ImageNet Roulette: A.I. photo analyzer shows the problems with face scans

ImageNet Roulette has lifted the lid on one of the world's most important datasets.

by Mike Brown

When an A.I. looks at your picture, what does it see? If the results of ImageNet Roulette are anything to go by, you might not want to know the answer.

How ImageNet Roulette works:

The project, accessible through its website, lets users upload any image, take one with their camera, or supply one from a web address. It then runs the image through ImageNet, a classification system with more than 20,000 categories built on 14 million labeled images.
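The article does not detail ImageNet Roulette's internal pipeline, but the workflow it describes (take an image, return ImageNet labels) resembles ordinary inference with a pretrained classifier. Below is a rough illustrative sketch only, not the project's actual code: it assumes torchvision's ResNet-50, which covers ImageNet's standard 1,000-class subset rather than the 20,000-plus categories the project draws on, and a hypothetical file name.

# Illustrative sketch only: not ImageNet Roulette's actual code. It shows the
# general idea of running a photo through a model pretrained on ImageNet
# labels and reporting the most likely categories.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet50(pretrained=True)  # pretrained on the 1,000-class ImageNet subset
model.eval()

# Standard ImageNet preprocessing: resize, center-crop, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify(path, k=5):
    """Return the top-k (class index, confidence) pairs for an image file."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = torch.softmax(model(batch)[0], dim=0)
    top = torch.topk(probs, k=k)
    return list(zip(top.indices.tolist(), top.values.tolist()))

# Hypothetical usage: print(classify("selfie.jpg"))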

ImageNet is the most widely cited A.I. training set in the world, according to ImageNet Roulette co-creator and artist Trevor Paglen.

ImageNet, which debuted in 2009, was regularly used by A.I. researchers to benchmark the accuracy of their algorithms in annual competitions. Quartz declared it “the data that transformed A.I. research.”

In a world where smartphones scan faces to unlock and police scan faces to identify suspects, the project shines a light on how the A.I. behind these systems may classify you, which can sometimes lead to disturbing and racist results.

ImageNet Roulette, on display at an exhibition that opened this month, has produced some surprising results. One journalist was classified as a “biographer” based on her photo, another as a “dweeb.” The system labeled one bearded man a “beard,” a term it defined as “a woman who accompanies a male in order to conceal his homosexuality.” One user found that the system described her, based purely on her image, as a “rape suspect.”

The system also produced racist descriptions. Stephen Bush, political editor at the New Statesman, shared the abhorrent terms that came up when he put his own image into the system:

Stephen Bush shares the results from ImageNet Roulette. (Twitter/@stephenkb)

Bush subsequently shared his article from January, in which he noted how people with darker skin sometimes have to turn their hands over to trigger automatic soap dispensers. This, Bush explains, is an example of a system released without testing for these flaws, leading to bias in algorithms.

Bush sums up this massive problem perfectly:

Any algorithm is only as good as the assumptions that are fed into it, and if you aren’t careful, you can end up with troubling results.

Paglen and Kate Crawford, the creators of ImageNet Roulette, explain on the project’s page that revealing these biases is part of its goal:

ImageNet contains a number of problematic, offensive and bizarre categories - all drawn from WordNet. Some use misogynistic or racist terminology. Hence, the results ImageNet Roulette returns will also draw upon those categories. That is by design: we want to shed light on what happens when technical systems are trained on problematic training data. A.I. classifications of people are rarely made visible to the people being classified. ImageNet Roulette provides a glimpse into that process – and to show the ways things can go wrong.

The system is on display at the Fondazione Prada museum in Milan, Italy, as part of the “Training Humans” exhibition, which runs from September 12 to February 24, 2020. Paglen will also use some of the images from ImageNet in the “From Apple to Anomaly (Pictures and Words)” exhibition at London’s Barbican Centre, which runs from September 25 to February 16, 2020.

As face recognition plays a greater role in our lives, ImageNet Roulette is a sharp reminder of the underlying assumptions that can hide inside these systems.