
Smart Speakers Can Be Hacked with Sounds, Say Researchers Out to Stop It

A.I. isn't as all-knowing as you might think.

Flickr / windyjonas

What if we told you that a hacker could give your Amazon Echo a command without you noticing, and without doing any hacking as we normally think of it?

Moustafa Alzantot, a computer science Ph.D. candidate at the University of California, Los Angeles, says it’s theoretically possible for a malicious actor to send a particular sound or signal that would usually go completely unnoticed by humans but cause an A.I.’s deep learning algorithms to falter.

“One example of [an attack] would be controlling your home device, without you knowing what’s happening,” Alzantot tells Inverse. “If you’re playing some music on the radio and you have an Echo sitting in your room. If a malicious actor is able to broadcast a crafted audio or music signal such that the Echo will interpret it as a command, this would allow the attacker to, say, unlock a door or purchase something.”

It’s an attack known as an adversarial example, and it’s what Alzantot and the rest of his team aim to stop, as described in their paper recently presented at the NIPS 2017 Machine Deception workshop.

A.I. is no different from the human intelligence that created it in the first place: It has its flaws. Computer science researchers have figured out ways to completely fool these systems by slightly altering pixels in a photo or adding faint noises to audio files. These minute tweaks are undetectable by humans but completely alter what an A.I. hears or sees.
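
To get a rough sense of just how small these tweaks are, here is a minimal sketch in Python of the audio version of the idea. The function name and the noise budget are purely illustrative; they aren’t taken from the team’s paper.

```python
import numpy as np

def add_adversarial_noise(waveform, noise, epsilon=0.005):
    """Mix a tiny, crafted noise signal into an audio clip.

    The noise is clipped to a small budget (epsilon) relative to the
    signal's full scale, so a human hears essentially the same clip,
    while a model's prediction can still be nudged to another label.
    """
    perturbed = waveform + np.clip(noise, -epsilon, epsilon)
    return np.clip(perturbed, -1.0, 1.0)  # keep samples in the valid audio range
```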

“These algorithms are designed to attempt to classify what was said so they can act upon it,” Mani Srivastava, a computer scientist at UCLA, tells Inverse. “We try to subvert the process by manipulating the input in a manner that a human nearby hears ‘no’ but the machine hears ‘yes.’ So you can force the algorithm to interpret the command differently than what was said.”

The most common adversarial examples are those targeting image classification algorithms: tweaking a photo of a dog ever so slightly to make the A.I. think it’s something completely different. Alzantot and Srivastava’s research has pointed out that speech recognition algorithms are also susceptible to these types of attacks.

Adversarial attacks on speech commands: A malicious attacker adds a small amount of noise to the audio so that the speech recognition model misclassifies it, while human perception of the clip is unchanged.

Moustafa Alzantot/UCLA

In the paper, the group used a standard speech classification system found in Google’s open-source library, TensorFlow. Their system was tasked with classifying one-word commands: it would listen to an audio file and try to label it with the word spoken in it.
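
For readers who want a concrete picture, the sketch below shows what a one-word speech-command classifier of this kind might look like in TensorFlow. The layer sizes, input shape, and number of labels are assumptions made for illustration, not the architecture the group actually used.

```python
import tensorflow as tf

NUM_COMMANDS = 10                 # e.g. "yes", "no", "up", "down", ... (assumed)
SPECTROGRAM_SHAPE = (98, 40, 1)   # time frames x mel bins x channels (assumed)

def build_keyword_classifier() -> tf.keras.Model:
    """Small CNN that maps an audio spectrogram to one of NUM_COMMANDS labels."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=SPECTROGRAM_SHAPE),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(NUM_COMMANDS, activation="softmax"),
    ])
```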

They then coded another algorithm to try to trick the TensorFlow system using adversarial examples. This system was able to fool the speech classification A.I. 87 percent of the time using what is known as a black box attack, in which the attacking algorithm doesn’t have to know anything about the design of the model it is targeting.
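
In rough strokes, a black box attack only needs to query the victim model and watch its answers. The loop below is a deliberately simplified, random-search stand-in for the team’s actual optimization procedure; the predict_fn interface, the perturbation budget, and the iteration count are all illustrative assumptions.

```python
import numpy as np

def black_box_attack(predict_fn, waveform, target_label,
                     epsilon=0.005, iterations=1000, seed=0):
    """Search for a tiny perturbation that makes the victim model output
    target_label, using only its predictions (queries), never its internals."""
    rng = np.random.default_rng(seed)
    best, best_score = waveform, predict_fn(waveform)[target_label]
    for _ in range(iterations):
        noise = rng.uniform(-epsilon, epsilon, size=waveform.shape)
        candidate = np.clip(waveform + noise, -1.0, 1.0)
        scores = predict_fn(candidate)
        if scores[target_label] > best_score:   # keep whichever perturbation
            best, best_score = candidate, scores[target_label]  # moves us closer
        if np.argmax(scores) == target_label:   # the model now "hears" the
            return candidate                    # attacker's command
    return best
```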

“There are two ways to mount these kinds of attacks,” explains Srivastava. “One is when I, as the adversary, know everything about the receiving system, so I can now make up a strategy to exploit that knowledge; this is a white box attack. Our algorithm does not require knowing the architecture of the victim model, making it a black box attack.”
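
For contrast, a white box attacker can reach inside the model and use its gradients directly. The snippet below sketches the fast gradient sign method, a standard technique from the adversarial-examples literature rather than anything described in this paper: a single gradient step tells the attacker exactly which direction to nudge the input.

```python
import tensorflow as tf

def white_box_perturbation(model, spectrogram, true_label, epsilon=0.01):
    """One fast-gradient-sign step: nudge the input in the direction
    that most increases the model's loss on the correct label."""
    x = tf.convert_to_tensor(spectrogram[None, ...], dtype=tf.float32)
    y = tf.convert_to_tensor([true_label])
    with tf.GradientTape() as tape:
        tape.watch(x)
        predictions = model(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y, predictions)
    gradient = tape.gradient(loss, x)
    return x + epsilon * tf.sign(gradient)  # an imperceptibly small step
```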

Black box attacks are clearly less effective, but they’re also what would most likely be used in a real-life attack. The UCLA group was able to achieve a success rate as high as 87 percent even without tailoring the attack to exploit weaknesses in the victim model; a white box attack would be all the more effective at messing with this type of A.I. However, virtual assistants like Amazon’s Alexa aren’t the only things that could be exploited using adversarial examples.

“Machines which are relying on making some sort of an inference from sound could be fooled,” says Srivastava. “Obviously, the Amazon Echo and such is one example, but there are a lot of other things where sound is used to make inferences about the world. You have sensors linked to alarm systems that take in sound.”

The finding that artificial intelligence systems that take in audio cues are also susceptible to adversarial examples is a further step toward understanding how powerful these attacks are. While the group was not able to pull off a broadcast attack like the one Alzantot described, their future work will revolve around seeing how feasible that is.

While this research tested only a limited set of voice commands and attack types, it highlights a possible vulnerability in a large portion of consumer tech. It acts as a stepping stone for further research into defending against adversarial examples and teaching A.I. to tell them apart from legitimate input.
