MIT Self-Driving Car Tech Sees Through Fog 58 Percent Better Than People

A new visual sensor system being developed at the Massachusetts Institute of Technology could solve a problem that has long dogged developers of autonomous driving tech: How does a car drive itself through misty rain and a pea-soup fog?

MIT researchers revealed this week that their innovative system doesn’t just see through fog as well as humans can; it sees much better. In fog so dense that human vision could penetrate only 36 centimeters (14.2 inches), the system was able to resolve images of objects and gauge their depth at a range of 57 centimeters (22.4 inches), a 58 percent improvement. The research could mark a big step toward building autonomous cars that won’t fail in bad weather. Fog has long posed a problem for self-driving tech because it creates a sort of white noise in a car’s visual sensors.

A standard camera view of a foggy scene (left) contrasted with how the new system "sees" through fog. (MIT/YouTube)

Here’s how they achieved that improvement: Instead of a regular camera, the researchers used what’s called a “time-of-flight” camera, which works by shooting bursts of laser light into the environment and measuring how long it takes for that light to reflect back to the camera. Foggy conditions generally wreak havoc on those cameras, because light also reflects off the suspended water droplets in the fog. Those extra reflections mean that a time-of-flight camera operating in dense fog may be unable to distinguish between drops of water and solid objects.
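
To make the principle concrete, here’s a minimal Python sketch of the time-of-flight idea: round-trip travel time maps directly to distance, and an early bounce off a fog droplet reads as a phantom object close to the camera. The numbers are illustrative only, not from the MIT system.

```python
# Minimal sketch of time-of-flight ranging: a photon's round-trip
# travel time, halved and multiplied by the speed of light, gives
# distance. A photon scattered early by a fog droplet therefore
# looks like a solid object very close to the camera.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(seconds: float) -> float:
    """Light travels out and back, so halve the round-trip time."""
    return SPEED_OF_LIGHT * seconds / 2.0

# A return from a wall ~0.57 m away arrives after ~3.8 nanoseconds...
wall_echo = 0.57 * 2 / SPEED_OF_LIGHT
print(f"wall: {distance_from_round_trip(wall_echo):.2f} m")

# ...but a photon scattered by a fog droplet 0.2 m out comes back
# sooner, and registers as a solid object at 0.2 m.
fog_echo = 0.20 * 2 / SPEED_OF_LIGHT
print(f"fog droplet: {distance_from_round_trip(fog_echo):.2f} m")
```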

Through a statistical analysis of how fog reflects light, the MIT team realized that the way fog scatters laser light follows a predictable pattern. By programming their software to account for that pattern, they could effectively filter out the visual white noise. “What’s nice about this is that it’s pretty simple,” lead researcher Guy Satat says of his work. “If you look at the computation and the method, it’s surprisingly not complex. We also don’t need any prior knowledge about the fog and its density, which helps it to work in a wide range of fog conditions.”
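
As a rough illustration of that filtering step, here’s a Python sketch. The team’s published paper models fog backscatter arrival times with a gamma distribution; everything else below (the simulated photon counts, bin sizes, and timings) is invented for the demo and is not the researchers’ code.

```python
# Loose illustration of the core idea: photons that bounce off fog
# form a broad, predictable distribution of arrival times, while
# returns from a solid object cluster sharply. Fitting the fog
# distribution per pixel, with no prior knowledge of density, lets
# you subtract it and keep the spike that marks the real object.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated photon arrival times (nanoseconds) for one pixel:
fog_photons = rng.gamma(shape=2.0, scale=1.5, size=5000)    # diffuse fog scatter
object_photons = rng.normal(loc=3.8, scale=0.05, size=300)  # sharp echo from a wall
arrivals = np.concatenate([fog_photons, object_photons])

# Fit a gamma distribution to all arrivals; fog dominates the data,
# so the fit approximates the fog background on its own.
shape, loc, scale = stats.gamma.fit(arrivals, floc=0.0)

# Histogram the arrivals and subtract the expected fog counts per bin.
counts, edges = np.histogram(arrivals, bins=200)
centers = (edges[:-1] + edges[1:]) / 2
expected_fog = stats.gamma.pdf(centers, shape, loc, scale) * arrivals.size * np.diff(edges)
residual = counts - expected_fog

# The bin with the largest leftover spike is the object's echo time.
t_object = centers[np.argmax(residual)]
print(f"estimated object echo: {t_object:.2f} ns")  # close to the true 3.8 ns
```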

Satat and his fellow researchers will present their paper at the International Conference on Computational Photography, held May 4-6 at Carnegie Mellon University in Pittsburgh. With automakers like Ford aiming to have autonomous cars on the road by 2020, developments like this represent a huge step toward that goal.

“We’re dealing with realistic fog, which is dense, dynamic, and heterogeneous. It is constantly moving and changing, with patches of denser or less-dense fog,” Satat said. “Other methods are not designed to cope with such realistic scenarios.”