Hackers Will Soon Be Trying to Send Your Driverless Car Off a Cliff

A good 'spoof' could 'stop Teslas in their tracks.'

Getty Images / Spencer Platt

In late 2016, reports flooded in from Moscow about a strange disturbance near the Kremlin. When passing by the fortified complex at the cold heart of the Russian government, drivers found the GPS systems in their cars had been suddenly spoofed.

In December, CNN confirmed that instead of showing the cars where they really were — cruising along the Moskva River — the GPS suddenly insisted the cars were 20 miles away, at Vnukovo International Airport.

When this news reached Todd Humphreys, halfway across the world in Austin, Texas, he couldn’t help but feel vindicated. The foremost mind in technological trickery (the foremost law-abiding mind, anyway), Humphreys was certainly unsettled by Russia’s actions. But after almost a decade testing the potential for devious transit takeovers in the lab, he tells Inverse he was excited to have finally found a “spoofing case documented in the wild.”

As head of the Radionavigation Laboratory at the University of Texas, Humphreys is the world’s leading expert in spoofing and jamming. Though his job sounds downright musical, it’s actually defined by pencils scratching paper and wheels screeching on pavement. That’s because Humphreys spends most of his time thinking up worst-case scenarios of planes, ships, and automobiles hacked by nefarious forces. He says that in an era where partially automated vehicles are already on the road and annual sales of fully autonomous cars are expected to hit 12 million by 2035, what happened at the Kremlin might soon seem quaint.

See also: This Artist Explains How to “Trap” a Self-Driving Car With Witchcraft

Footage of Todd Humphreys's homemade spoofer. “The world seems upside down when this little blue dot you’ve come to trust goes traipsing off without you," he tells *Inverse*. "I saw it opening up all of the problems that could happen if someone with my know-how didn’t have my restraint.”

Todd Humphreys

Driverless Vehicles Are Uniquely Vulnerable

“The self-driving car doesn’t have ESP,” Humphreys says. “It gets information from its sensors. It determines its location from its sensors, if there’s a crash coming up ahead, if the light is green or red — from its sensors.” Right now, a hacker could still send a confusing signal to a car, interrupting the real data coming from the satellites and showing the car somewhere it isn’t. But drivers are still in control, and they typically know where they are, regardless of what a GPS says.

In an autonomous car, however, if the operating system is fed bad data, the car makes decisions based on it, allowing a hacker to remotely send a vehicle off the road or drive it down a different course.

Though Humphreys is quick to assure people he thinks autonomous cars are inevitable and exciting, he says skepticism about driverless vehicles is far from misguided. In fact, these doomsday scenarios are totally plausible. That’s why his lab is working on technology that can spoof-proof a vehicle. But, Humphreys says, many autonomous car manufacturers have been reluctant to pay up, and some, like Tesla, have actually made safety modifications to their cars that could compromise security.

Last summer, a Tesla operating on Autopilot got confused, causing a deadly crash. It wasn’t caused by hacking, but an investigation into the tragedy did shape Humphreys’s work on intentional attacks. The accident report showed that, to the car’s front-facing cameras, a passing white truck was indistinguishable from the bright Florida sky, so the autonomous car careened into the nearby vehicle. The car’s radar had actually recognized the threat, Humphreys says, but it was overruled by the blinded front-facing camera.

To rectify this, Tesla has reportedly changed its system. Where the car’s front-facing camera and its radar used to have to agree in order to trigger a change in direction, reports indicate now only one system has to identify a problem in order to change the car’s course. Humphreys understands why such a change makes sense in context, but he says any move to reduce redundancy actually makes a car even more susceptible to hacking.

“Now I can just stop the car by just spoofing the radar, I don’t have to spoof the radar and the camera,” Humphreys says. “It’s an example of where, when you fix one problem… you perhaps make it less resistant to intentional attack. I’m fairly certain with a radar-spoofing device… we could stand by the side of the road and watch for Teslas and stop them in their tracks.”
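The redundancy change Humphreys describes boils down to a one-word difference in fusion logic. A minimal sketch, assuming a simplified model in which each sensor emits a boolean obstacle alert (the function names and boolean-alert model are invented for illustration, not Tesla's actual system):

```python
def fuse_require_agreement(camera_alert: bool, radar_alert: bool) -> bool:
    """Earlier policy: both sensors must agree before the car reacts."""
    return camera_alert and radar_alert

def fuse_any_sensor(camera_alert: bool, radar_alert: bool) -> bool:
    """Revised policy: either sensor alone can trigger a reaction."""
    return camera_alert or radar_alert

# A radar spoofer injects a phantom obstacle (radar_alert=True)
# while the camera sees clear road (camera_alert=False):
camera_alert, radar_alert = False, True

print(fuse_require_agreement(camera_alert, radar_alert))  # False: the lone spoofed sensor is ignored
print(fuse_any_sensor(camera_alert, radar_alert))         # True: the spoof alone stops the car
```

Under the "any sensor" policy, an attacker needs to fool only one input; under the agreement policy, a phantom radar return is vetoed by the camera — which is exactly the trade-off between safety redundancy and spoofing resistance described above.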

In *Silicon Valley*, a character is abducted by a driverless car headed for the Pacific Ocean.

How to Spoof-Proof a Driverless Car

It seems the public has picked up on this potential for disaster. A 2016 AAA survey indicated that as many as 75 percent of Americans are scared to ride in an autonomous vehicle. “Everybody’s primary fear is they’re traveling down the road in an autonomous car and someone somehow hacks them remotely and takes them off to some far-off place and locks the door,” Humphreys says. This particular concern was comically played out in the first season of *Silicon Valley* and, with fewer laughs, in a real-life recall of Jeep Cherokees that researchers proved in 2015 could be hacked and remotely controlled.

But the vulnerability of GPS systems — which Humphreys gave a downright prescient TED talk about all the way back in 2012 — isn’t the only chink in a driverless car’s armor. Many autonomous vehicles are also getting data from Lidar, a laser-based system. Though Lidar would be difficult to hack remotely, Humphreys believes a truly determined spoofer could easily beam a misleading laser at a car and swiftly disrupt this crucial system. Meanwhile, many of the cars have a Dedicated Short Range Communications system, known as DSRC, which allows them to send signals to each other about their position and velocity. It’s encrypted, but it’s not foolproof. Humphreys says a spoofer could record an outgoing DSRC message and play it back to the cars, disabling their inter-vehicle communication and wreaking havoc.
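The replay attack works because authenticity alone doesn't prove freshness: a recorded message still verifies perfectly hours later. A toy sketch of the gap, using HMAC signing to stand in for DSRC's cryptography (the key, message format, and freshness window are all invented for illustration):

```python
import hashlib
import hmac

KEY = b"shared-fleet-key"  # hypothetical shared secret

def sign(payload: bytes) -> bytes:
    """Produce an authentication tag for a message."""
    return hmac.new(KEY, payload, hashlib.sha256).digest()

def verify_naive(payload: bytes, tag: bytes) -> bool:
    """Checks authenticity only: a replayed capture still passes."""
    return hmac.compare_digest(sign(payload), tag)

def verify_fresh(payload: bytes, tag: bytes, sent_at: float,
                 now: float, max_age: float = 0.5) -> bool:
    """Adding a signed timestamp (or nonce) bounds how stale a message can be."""
    return hmac.compare_digest(sign(payload), tag) and (now - sent_at) <= max_age

msg = b"pos=30.28,-97.74;vel=31.0"   # a hypothetical position/velocity beacon
tag = sign(msg)

print(verify_naive(msg, tag))                         # True, even if replayed much later
print(verify_fresh(msg, tag, sent_at=0.0, now=60.0))  # False: a minute-old replay is rejected
```

This is why "it's encrypted" isn't the end of the story: the countermeasure is not stronger encryption but binding each message to a moment in time, so a recording loses its value the instant it's captured.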

To combat these emerging problems, Humphreys and his team at the University of Texas are working on a variety of countermeasures. Many require the redundancy that Tesla eliminated, as no single piece of technology is enough to prevent an attack alone. Some of the efforts at hand center on improving cryptography and developing better spoofing detectors, like sensors that can determine where a signal is actually emanating from.
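One simple flavor of that cross-checking redundancy is a plausibility test: if two consecutive GPS fixes imply a speed no car could reach, something is lying. A minimal sketch of the idea (the speed threshold, fix format, and function names are assumptions for illustration, not the lab's actual design):

```python
import math

def implied_speed_mps(fix_a, fix_b, dt_s):
    """Approximate ground speed implied by two (lat, lon) fixes dt_s seconds apart."""
    lat1, lon1 = fix_a
    lat2, lon2 = fix_b
    # Equirectangular approximation: fine at city scale.
    m_per_deg = 111_320.0
    dx = (lon2 - lon1) * m_per_deg * math.cos(math.radians(lat1))
    dy = (lat2 - lat1) * m_per_deg
    return math.hypot(dx, dy) / dt_s

def looks_spoofed(fix_a, fix_b, dt_s, max_speed_mps=70.0):
    """Flag a fix whose implied speed exceeds what the vehicle could do."""
    return implied_speed_mps(fix_a, fix_b, dt_s) > max_speed_mps

# A Kremlin-style jump: the fix teleports roughly 20 miles in one second.
print(looks_spoofed((55.7520, 37.6175), (55.5914, 37.2615), 1.0))  # True
# Ordinary driving, about 60 km/h:
print(looks_spoofed((55.7520, 37.6175), (55.7521, 37.6177), 1.0))  # False
```

A real system would fuse GPS against wheel odometry and inertial sensors rather than GPS against itself, but the principle is the same: independent sources that must agree make a single spoofed signal much easier to catch.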

The Russians Are At It Again

Still, when asked if manufacturers are putting up the money to adopt anti-spoofing technology, Humphreys pauses. “Well, no,” he says. Car companies have told him that because autonomous cars are still in the early stages, there are more pressing matters to attend to. They’re not thinking yet about hacking, but instead are focused on not hitting bikers… or small children.

Still, Humphreys is confident people will come to understand the risks technologically sophisticated ne’er-do-wells pose to self-driving cars, autonomous planes, and other automated devices. But he’s not sure how long this will take.

“The actual response in the commercial world has been very low,” he says, “mostly because they say, ‘Oh, that’s just Todd Humphreys in the laboratory.’” But, he says, a second event has cropped up in the “wild,” courtesy of the Russians, and their misdeeds may soon be enough to tip the scales in favor of anti-spoofing devices.

In June, cargo ships in the Black Sea suddenly lost track of each other. When in port, every ship beams out a message, positioning itself on its neighbor’s radar. Should sudden fog sweep in or visibility disappear, ships can continue to track one another and avoid devastating collisions. But, captains and crews reported at the time, the radar positions of their ships and dozens of others in the eastern part of the Black Sea artificially shifted, rendering the radars useless and endangering everyone on the water.

What exactly occurred remains unknown, but the U.S. Maritime Administration issued a warning to ships sailing in Russian waters: “The nature of the incident is reported as GPS interference,” it said. “Exercise caution…”

See also: When Will We Trust Autonomous Cars?
