Computer systems designed to recognize images aren’t that smart. Researchers have been able to create a set of glasses that makes an A.I. system think Reese Witherspoon is Russell Crowe, while others have been able to fool systems into seeing everyday household objects in psychedelic-looking patterns.
A new report published by BBC Future on Monday highlights the risks around these flaws. It may sound like great fun to convince a machine that a bespectacled actress is, in fact, the man who played Maximus in Gladiator, but it’s bad news for self-driving cars that depend on such algorithms to navigate.
“If spam gets through or a few emails get blocked, it’s not the end of the world,” Daniel Lowd, assistant professor of computer and information science at the University of Oregon, told the BBC. “On the other hand, if you’re relying on the vision system in a self-driving car to know where to go and not crash into anything, then the stakes are much higher.”
The BBC uses the example of a self-driving car algorithm that’s been trained to understand what a stop sign looks like. The computer builds up an idea of what to look out for, but unlike a child learning what something looks like, it tends to latch onto very specific details rather than broad ideas. A child will remember the general shape of the sign, but a computer will remember that specific pixels appeared in specific places on the signs it was shown. Tampering with those exact points could mean a car not stopping in time.
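The pixel-level fragility described above can be sketched with a toy model. The snippet below is an illustration, not any real self-driving system: it uses a made-up linear classifier over a 64×64 “image” and shows that nudging every pixel by a tiny amount (0.002 on a 0-to-1 scale), in the direction the model is most sensitive to, flips a confident “stop sign” verdict. All weights, margins, and thresholds here are invented for demonstration.

```python
import random

random.seed(0)

d = 4096  # pixels in a toy 64x64 "image"
x = [random.random() for _ in range(d)]     # clean image: pixel values in [0, 1]
w = [random.gauss(0, 1) for _ in range(d)]  # made-up classifier weights, one per pixel
b = 0.0

def score(img):
    # Linear score: positive means "stop sign" in this toy model.
    return sum(wi * xi for wi, xi in zip(w, img)) + b

# Choose the bias so the clean image is classed "stop sign" with a margin of 5.
b = 5.0 - score(x)

def predict(img):
    return 1 if score(img) > 0 else 0

print(predict(x))  # 1: the model confidently sees a stop sign

# Adversarial tweak: move each pixel by a tiny epsilon against the
# direction of its weight (the idea behind gradient-based attacks),
# then clip back into the valid [0, 1] pixel range.
eps = 0.002
x_adv = [min(1.0, max(0.0, xi - eps * (1.0 if wi > 0 else -1.0)))
         for xi, wi in zip(x, w)]

print(predict(x_adv))                         # 0: the "stop sign" has vanished
print(max(abs(a - c) for a, c in zip(x_adv, x)) <= eps)  # no pixel moved more than eps
```

Each individual pixel barely changes, yet the thousands of tiny nudges add up across the weights and overwhelm the model’s margin, which is exactly why attacks like this are so hard to spot by eye.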
That dependence on specific pixels has led to some bizarre outcomes: researchers at the University of Wyoming fooled a computer into confidently labeling abstract, psychedelic-looking patterns as common images.
Another team from France and Switzerland was able to trick a system into seeing a fox as a squirrel.
What’s a designer to do? More rigorous testing could uncover these flaws, which in turn would help make the systems more resilient. Something needs to be done in the long term; mistaking Witherspoon for Crowe just isn’t okay.