Ghost in the shell

Spooky video shows self-driving cars being tricked by holograms

This could be a concerning development for those who are working on autonomous vehicles.

One of the major concerns surrounding the development of driverless cars is that people might be able to hack into them remotely and take control. Tech journalists have been investigating ways this could be done for years, and car manufacturers are working hard to make sure their vehicles won't let this happen. It turns out you might not need to go very high tech to stop a driverless car, though, as researchers in Israel recently managed to stop one by simply projecting "phantoms" onto the road.

Researchers from Ben-Gurion University of the Negev's (BGU) Cyber Security Research Center in Israel found that both semi-autonomous and fully autonomous cars stopped when they detected what they thought were humans in the street but were actually projections. They also projected a street sign onto a tree and fake lane markers onto the street to trick the cars. The research was published through the International Association for Cryptologic Research's ePrint archive.


Ben Nassi, the lead author and a Ph.D. student, said in a statement that the companies developing these vehicles are overlooking this kind of issue.

"This type of attack is currently not being taken into consideration by the automobile industry. These are not bugs or poor coding errors but fundamental flaws in object detectors that are not trained to distinguish between real and fake objects and use feature matching to detect visual objects," Nassi said.

The researchers are now using a neural network to develop a system that can tell when a detected object is just a projected image and when it isn't. Current systems, which in principle should be able to recognize that something is only a 2D image, appear instead to be operating under a "better safe than sorry" policy that could become problematic.
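To make the idea concrete, here is a minimal sketch in Python (PyTorch) of the kind of binary classifier such a countermeasure might use: it takes the cropped camera image of a detected object and outputs whether it looks like a real, solid object or a flat projection. The architecture, input size, and training setup are illustrative assumptions, not the BGU team's published model.

import torch
import torch.nn as nn

# Illustrative only: a tiny CNN that labels a 64x64 crop of a detected object
# as "real" or "projected phantom". Not the researchers' actual model.
class PhantomClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single logit: higher means "phantom"
        )

    def forward(self, crop):  # crop: (batch, 3, 64, 64)
        return self.net(crop)

model = PhantomClassifier()
crop = torch.rand(1, 3, 64, 64)            # stand-in for a detected-object crop
phantom_prob = torch.sigmoid(model(crop))  # after training: probability it's a projection

In practice such a classifier would have to run alongside the existing object detector and veto false detections fast enough to matter at driving speeds.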

This isn't the first time a semi-autonomous or autonomous vehicle has been tricked into thinking an object was there when it wasn't. Researchers from the University of South Carolina were able to trick a Tesla that was on autopilot in a similar way in 2016.

As we reported last year, research shows there are reasons to be concerned about driverless cars being hacked, even in a low-tech fashion. Researchers found that hacking even a small number of vehicles in a city could have major impacts on traffic and create dangerous driving conditions. Not only would the people near a hacked vehicle be in danger; there would also be a ripple effect causing terrible traffic well beyond the immediate area.


It will be difficult for car manufacturers to design autonomous driving systems that can never be hacked or tricked in the ways these researchers in Israel demonstrated. That said, roughly 40,000 people die in car accidents in the U.S. every year with humans behind the wheel, so autonomous vehicles may well end up being safer. Furthermore, some autonomous vehicle experts argue it's easier to hack a car someone is driving than a driverless car.

Abstract
The absence of deployed vehicular communication systems, which prevents the advanced driving assistance systems (ADASs) and autopilots of semi/fully autonomous cars from validating their virtual perception of the physical environment surrounding the car with a third party, has been exploited in various attacks suggested by researchers. Since the application of these attacks comes with a cost (exposure of the attacker’s identity), the delicate exposure vs. application balance has held, and attacks of this kind have not yet been encountered in the wild. In this paper, we investigate a new perceptual challenge that causes the ADASs and autopilots of semi/fully autonomous cars to consider depthless objects (phantoms) as real. We show how attackers can exploit this perceptual challenge to apply phantom attacks and change the abovementioned balance, without the need to physically approach the attack scene, by projecting a phantom via a drone equipped with a portable projector or by presenting a phantom on a hacked digital billboard that faces the Internet and is located near roads. We show that the car industry has not considered this type of attack by demonstrating the attack on today’s most advanced ADAS and autopilot technologies: Mobileye 630 PRO and the Tesla Model X, HW 2.5; our experiments show that when presented with various phantoms, a car’s ADAS or autopilot considers the phantoms as real objects, causing these systems to trigger the brakes, steer into the lane of oncoming traffic, and issue notifications about fake road signs. In order to mitigate this attack, we present a model that analyzes a detected object’s context, surface, and reflected light, which is capable of detecting phantoms with 0.99 AUC. Finally, we explain why the deployment of vehicular communication systems might reduce attackers’ opportunities to apply phantom attacks but won’t eliminate them.
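The mitigation described in the abstract weighs several cues about a detected object, its surrounding context, its surface, and the light it reflects, before trusting it. The short sketch below illustrates that kind of multi-cue fusion with a simple averaged vote; the score sources, weights, and threshold are assumptions for illustration, not values or code from the paper.

from dataclasses import dataclass

# Hypothetical fusion step for a phantom detector of the kind the abstract
# describes: three per-aspect scores are combined into one real-vs-phantom call.
@dataclass
class AspectScores:
    context: float  # does the object make sense where it appears? (0..1)
    surface: float  # does its texture look like a solid object? (0..1)
    light: float    # is its reflected light consistent with the scene? (0..1)

def looks_real(s: AspectScores, threshold: float = 0.5) -> bool:
    # Simple averaged committee; a learned fusion layer could replace this.
    return (s.context + s.surface + s.light) / 3.0 >= threshold

# A projected pedestrian would typically score low on the surface and light cues.
print(looks_real(AspectScores(context=0.2, surface=0.1, light=0.15)))  # False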