Giving self-driving cars ‘memories’ of past trips could solve safety issues

A new study from Cornell University asks whether remembering past drives could make autonomous vehicles less dangerous.

Ryan Young/Cornell University

The rise of Tesla’s Autopilot system has brought significant attention to the safety of autonomous driving systems, both in the media and in regulatory oversight groups. The general consensus seems to be that autonomous driving systems just aren’t safe enough to be out and about on public roads yet — though that certainly hasn’t stopped Tesla fanboys from using Autopilot, fatal as it may be.

Researchers at Cornell’s Ann S. Bowers College of Computing and Information Science, with help from Cornell’s College of Engineering, are working on three concurrent research papers that seek to give autonomous vehicles better “memories” they can draw on during future navigation. Allowing a self-driving car to examine past trips could very well make future trips safer for riders and pedestrians. Some sense of object permanence would improve self-driving in inclement weather, too.

Autonomous vehicles perceive the world around them and make decisions based on complex neural networks, but they don’t really change their behaviors based on past experiences.

“In reality, you rarely drive a route for the very first time,” co-author Katie Luo said. “Either you yourself or someone else has driven it before recently, so it seems only natural to collect that experience and utilize it.”

Hindsight is 20/20 — A group of Cornell doctoral students decided to test their memory theory by compiling data from a LiDAR-equipped vehicle driven around Ithaca. The car drove the same 18-kilometer loop 40 times over a year and a half, in varying weather and road conditions. They ended up with more than 600,000 “scenes” in total.

The researchers call their approach HINDSIGHT. It uses neural networks to describe objects as the vehicle passes them, compresses the descriptions, and then stores them on a virtual map.

That virtual map can then be referenced by any vehicle hooked up to the HINDSIGHT network, which essentially becomes a shared brain. It’s like a much more complex version of Google’s crowdsourced Maps information.

By tapping into this memory, self-driving vehicles can have some sense of what a road has looked like in the past — information that can be hugely useful when sensors become less reliable, like in heavy rain, snow, or fog.
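The mechanics described above — compress a scene description, pin it to a location on a shared map, and recall it on later passes — can be sketched in miniature. This is purely illustrative, not Cornell’s implementation; the grid size, descriptor format, and neighborhood query are all assumptions:

```python
from collections import defaultdict

class HindsightMap:
    """Toy spatial memory: past scene descriptors keyed by a coarse grid cell.

    Illustrative sketch only — the real HINDSIGHT system stores compressed
    neural-network descriptors of LiDAR scenes; here a descriptor is just a dict.
    """

    def __init__(self, cell_size_m=25.0):
        self.cell_size = cell_size_m
        self.cells = defaultdict(list)  # (ix, iy) -> list of past descriptors

    def _cell(self, x, y):
        # Quantize a map position (meters) to a grid cell.
        return (int(x // self.cell_size), int(y // self.cell_size))

    def store(self, x, y, descriptor):
        """Record a compressed scene descriptor observed at position (x, y)."""
        self.cells[self._cell(x, y)].append(descriptor)

    def recall(self, x, y):
        """Return all past descriptors from the current cell and its neighbors."""
        ix, iy = self._cell(x, y)
        hits = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                hits.extend(self.cells.get((ix + dx, iy + dy), ()))
        return hits

# One vehicle on an earlier trip stores what it saw near (100 m, 200 m)...
shared_map = HindsightMap()
shared_map.store(100.0, 200.0, {"label": "parked_truck", "feature": [0.1, 0.8]})

# ...and any vehicle on a later trip near the same spot recalls it, even if
# fog or snow hides the object from its live sensors.
past = shared_map.recall(110.0, 195.0)
print(len(past))  # → 1
```

Because the map is keyed by location rather than by vehicle, every car on the network both contributes to and benefits from the shared memory — the “shared brain” the researchers describe.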

Listen up, Elon — To make HINDSIGHT useful to existing autonomous driving systems, the researchers designed it so it can be added to any autonomous vehicle equipped with a LiDAR array. That means just about any self-driving automotive tech company would be able to use it — except for Tesla. Musk has called LiDAR a “crutch” and hasn’t put the technology in any Tesla vehicle.

Despite the many accidents caused by Tesla’s Full Self-Driving beta software, Tesla continues trying to forge its own path in the autonomous vehicle industry. That’s not likely to change any time soon, given that Musk is at the helm.

The first of the three papers on this research was presented by its lead author, Yurong You, at the International Conference on Learning Representations in April. The other two are being presented at the IEEE Conference on Computer Vision and Pattern Recognition this month.