Self-driving cars are coming, with 2020 often given as the year to expect the arrival of fully autonomous vehicles on the road. But their success will depend on making sure the car’s sensors are good enough to see and react to everything around them.

Many current vehicles equipped with forms of self-driving technology rely on what’s known as Light Detection and Ranging sensors, or LIDAR. A LIDAR unit shoots out a beam of light and measures how long it takes that light to bounce off whatever it’s looking at and return to the sensor, sort of like sonar.

“The problem is that light moves really fast, so in one nanosecond light has traveled one foot,” Achuta Kadambi, a PhD student at the Massachusetts Institute of Technology, tells Inverse in a phone call.

Those kinds of speeds make it difficult for a sensor to say with precision exactly how long the light took to travel out and bounce back. The measurement gets even fuzzier the farther away an object is, and the only way to solve it is to make the system powerful enough to distinguish light pulses arriving fractions of a nanosecond apart.

“So that means if you want path-length resolution that’s better than one foot, then my sensor needs to have a time resolution that’s better than one billionth of a second,” says Kadambi. “That’s asking a lot.”
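The relationship Kadambi describes comes down to simple arithmetic: the path-length resolution a detector can achieve is its timing resolution multiplied by the speed of light. A quick sanity check of the one-foot-per-nanosecond figure (variable names here are our own, for illustration):

```python
# Back-of-the-envelope check of the timing requirement described above.
C = 299_792_458.0  # speed of light in meters per second

def path_length_resolution(time_resolution_s):
    """Path-length resolution (in meters) for a given detector timing resolution."""
    return C * time_resolution_s

# A detector with 1 nanosecond timing resolution:
res_m = path_length_resolution(1e-9)
print(f"{res_m:.3f} m")  # ≈ 0.300 m, i.e. about one foot
```

So a detector good to one nanosecond can only pin down a light path to about a foot, which is exactly the wall Kadambi is pointing at.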

In a paper published in IEEE Access last week, Kadambi and Dr. Ramesh Raskar describe how they’ve figured out a way to overcome what they call “the curse of light-speed.”

Instead of building a detector fast enough to capture every oscillation of the returning light waves, they filter the light through a fiber-optic material to make it easier to measure.

“We’re coming up with a sophisticated way to filter the light before it hits the detector,” says Kadambi. “That way we can use ordinary detectors but obtain the path-length resolution of extraordinary systems.”

This can all get a bit technical, but here’s one way the system can work: Let’s say a self-driving car fires off a beam that pulses a billion times per second. Some of those pulses bounce straight back to the car at that rate, but others are very slightly shifted by the surrounding environment, so they return at 999,999,999 pulses each second.

That one-pulse-per-second difference would be nearly impossible for a computer system to detect directly. But when the two beams interact, their pulses largely cancel each other out, leaving a beat that repeats just once each second. That’s much easier for ordinary sensors to pick up.
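This cancellation is the classic beat-frequency effect: mixing two signals produces a component at the difference of their frequencies. A toy simulation (scaled down from gigahertz to a few hertz so it runs in an instant; the frequencies and variable names are our own, not the authors’) shows the slow beat emerging from two fast tones:

```python
import numpy as np

# Two tones 1 Hz apart, standing in for the 1,000,000,000 and
# 999,999,999 pulse-per-second beams in the example above.
f1, f2 = 10.0, 9.0
fs = 1000                       # samples per second
t = np.arange(0, 4, 1 / fs)     # four seconds of signal

# Mixing (multiplying) the tones yields sum and difference frequencies:
# cos(a)·cos(b) = 0.5·cos(a−b) + 0.5·cos(a+b)
mixed = np.cos(2 * np.pi * f1 * t) * np.cos(2 * np.pi * f2 * t)

# The dominant slow component of the mixed signal is the 1 Hz difference.
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(t), d=1 / fs)
low = freqs < 5                 # look only at the slow part of the spectrum
print(freqs[low][np.argmax(spectrum[low])])  # → 1.0
```

The fast 19 Hz sum component plays the role of the oscillations a detector can’t follow; the 1 Hz difference is what a slow, cheap sensor can actually read out.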

It’s those kinds of shortcuts that can make self-driving tech cheaper, easier, and hopefully more powerful. One potential benefit of this setup is it would allow cars to see into the distance even during foggy conditions, where existing LIDAR systems struggle.

Today’s LIDAR sensors cost roughly $75,000. That cost likely needs to come down to make self-driving cars affordable to the average driver (well, car user, we suppose, once none of us is doing the driving), but trying to make LIDAR work better than it does now could make the sensors even more expensive.

Using Kadambi and Raskar’s research, autonomous vehicles could instead be retrofitted with the filtering material to increase the resolution of the cameras already in the cars. That would be a cost-effective way to make truly self-driving cars a reality.
