
This Kitten Highlights the A.I. Flaw Holding Autonomous Cars Back

Kittens are kittens, not desktop computers. 

Unsplash / Dariusz Sankowski

A lot of autonomous vehicle experts, Tesla CEO Elon Musk among them, are aggressively predicting that autonomous cars will be mainstream within a decade. It doesn’t take a genius to note this is easier said than done: making a car drive itself without smashing into everything and everyone requires sophisticated artificial intelligence that can understand the surrounding environment as well as or better than humans do. And now, new research from the nonprofit OpenAI (ironically, a lab backed by Musk himself) highlights a potential wrench in the gears.

Here’s the problem: Last week, a team of scientists from the University of Illinois at Urbana-Champaign claimed it would be difficult for malicious parties to fool an autonomous car into misidentifying the objects its vision system picks up. Their paper argued, in essence, that a well-trained neural network fed by the car’s cameras would see an object from multiple angles and distances, and so build up a complete sense of what it’s looking at.

A hacker trying to interfere with the car’s ability to identify an object wouldn’t have to tamper with just one image but with an entire host of them, taken at different angles, from different distances, at different speeds. Tricking a car into thinking a stop sign is, say, a dog should be incredibly difficult when so many images have to be altered.
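To make that argument concrete, here’s a rough sketch (our illustration, not the Illinois team’s code) of the idea: a stock image classifier is queried on several views of the same hypothetical camera frame and the predictions are pooled, so a trick that fools only one view gets outvoted by the rest.

```python
# Rough sketch of the multiple-views argument, using an off-the-shelf classifier.
# The street_scene.jpg input and the particular views are illustrative assumptions.
import torch
import torchvision.transforms.functional as TF
from torchvision import models
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
img = Image.open("street_scene.jpg").convert("RGB")  # hypothetical camera frame

# Simulate the car seeing the same object from several distances and angles.
views = [
    TF.resize(img, [224, 224]),                                   # full view
    TF.resized_crop(img, top=20, left=20, height=img.height - 40,
                    width=img.width - 40, size=[224, 224]),       # slightly closer
    TF.resize(TF.rotate(img, angle=8), [224, 224]),               # slightly rotated
]

with torch.no_grad():
    probs = [torch.softmax(model(TF.to_tensor(v).unsqueeze(0)), dim=1) for v in views]

# Pool the predictions: a distortion that fools only one view barely moves the average.
consensus = torch.stack(probs).mean(dim=0).argmax().item()
print("consensus ImageNet class index:", consensus)
```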

The OpenAI researchers, however, found that these same algorithms are, in fact, at risk of failing to properly identify objects: a single, carefully crafted distortion can fool an algorithm into mislabeling an entire collection of views of an object all at once.

Since this is the internet, the researchers used a picture of a kitten to make their point. Just look at the video here:

As the bar graph on the right shows, the system believes the kitten is a monitor or desktop computer. Essentially, a small, deliberately added distortion causes the system to mislabel the image no matter the angle or zoom.
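Here’s a minimal sketch of what “mislabeled no matter the angle or zoom” means in practice. The adversarial image file, the classifier, and the particular zooms and rotations below are assumptions for illustration, not the setup OpenAI used.

```python
# Minimal check of a transformation-robust adversarial image. "adv_kitten.png" is a
# hypothetical, already-perturbed picture; we rescale and rotate it and see whether
# an off-the-shelf classifier keeps giving the same wrong label.
import torch
import torchvision.transforms.functional as TF
from torchvision import models
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
adv = Image.open("adv_kitten.png").convert("RGB")

for scale in (0.7, 1.0, 1.3):
    for angle in (-10, 0, 10):
        view = TF.rotate(adv, angle=angle)
        size = int(224 * scale)
        view = TF.resize(TF.resize(view, [size, size]), [224, 224])  # simulate zoom, then fit the network
        with torch.no_grad():
            label = model(TF.to_tensor(view).unsqueeze(0)).argmax(dim=1).item()
        print(f"scale={scale} angle={angle} -> predicted class index {label}")
# A "robust" distortion keeps the prediction pinned to the wrong class (e.g. a monitor)
# across every one of these views, rather than any cat class.
```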

An even more troubling version of the attack relies on an optimization method called projected gradient descent, which is used here not to defend the classifier but to craft the distortion itself. Rather than doctoring a single photo, the attack optimizes one small change across many transformed copies of the image, at different scales, rotations, and viewing angles, so the misclassification survives however the camera happens to capture the object. The result is a systematic failure: the error isn’t confined to certain data points but is baked into the entire process of collecting data about that object, and that is what makes it so serious for the algorithm.
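In rough terms, the attack can be sketched as projected gradient descent run over many randomly transformed copies of the image: at each step the distortion is nudged so the current view is pushed toward the wrong label, then clipped (the “projection”) so it stays too small for a person to notice. The sketch below is our illustration of that idea; the model, image file, target class index, and hyperparameters are assumptions rather than OpenAI’s published configuration.

```python
# Sketch of crafting a transformation-robust adversarial distortion with projected
# gradient descent. kitten.jpg, the target class, and every hyperparameter here are
# assumptions for illustration.
import random
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF
from torchvision import models
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

x = TF.to_tensor(Image.open("kitten.jpg").convert("RGB").resize((224, 224))).unsqueeze(0)
target = torch.tensor([664])          # assumed index of the wrong label we want to force
epsilon, step_size, steps = 8 / 255, 1 / 255, 200

delta = torch.zeros_like(x, requires_grad=True)   # the distortion being optimized
for _ in range(steps):
    # Sample a random view (rotation + zoom) so the distortion must survive many viewpoints.
    angle = random.uniform(-15.0, 15.0)
    size = int(224 * random.uniform(0.8, 1.2))
    view = TF.resize(TF.rotate(x + delta, angle=angle), [size, size])
    view = TF.resize(view, [224, 224])

    loss = F.cross_entropy(model(view), target)   # want the *target* label on this view
    loss.backward()
    with torch.no_grad():
        delta -= step_size * delta.grad.sign()    # gradient step toward the target class
        delta.clamp_(-epsilon, epsilon)           # projection: keep the change imperceptibly small
        delta.grad.zero_()

adv = (x + delta).detach().clamp(0, 1)            # the finished adversarial image
```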

One has to imagine, considering how creative hackers tend to get, that it wouldn’t be impossible for someone with malicious intentions to slip a flaw like this into an autonomous car’s algorithms. Perhaps self-driving cars will need more than 10 years of testing, and some seriously robust antivirus software, before they really become a ubiquitous form of personal transportation.