In a weekend presentation at the Conference on Computer Vision and Pattern Recognition, the head of Tesla’s artificial intelligence team showed off the massive supercomputer the automaker uses to process data for its self-driving project.
Ever the contrarian, Tesla has bucked accepted wisdom in the field of autonomous driving that a suite of expensive sensors, like Lidar, is necessary for a computer to achieve parity with a human driver.
The company instead outfits its electric cars with standard cameras and collects data from drivers’ cars, feeding it to centralized computers in the hope that its artificial intelligence software can study the data and learn how to drive like a human (well, hopefully better than one, actually).
Computer vision — Processing the immense amount of data coming from all of Tesla’s vehicles on the road today requires significant computing power. The supercomputer that Andrej Karpathy, Tesla’s AI chief, showed off is already in use and is roughly the fifth most powerful supercomputer in the world.
In his presentation, Karpathy acknowledges that vision-based automation will be much more difficult to achieve than incorporating Lidar, which uses lasers to create a three-dimensional map of the space around a car. Two-dimensional cameras struggle with certain scenarios, like seeing through fog. Karpathy argues, however, that pairing standard cameras with Lidar would force the software to decide which sensor to trust whenever the two disagree about what they’re seeing. By focusing on just one sensor type, Tesla could perfect its computer vision to the point that more expensive Lidar sensors aren’t necessary.
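The arbitration problem Karpathy describes can be sketched in a few lines. This is a toy illustration only, with made-up function names and a deliberately crude fallback rule; it is not Tesla’s actual software, just a sketch of why adding a second sensor adds a decision the software otherwise never has to make.

```python
# Toy sketch of the sensor-disagreement problem (hypothetical code,
# not Tesla's software): with two sensors, the hard case is when
# they disagree and the software must privilege one of them.

def fuse_obstacle_estimates(camera_sees_obstacle: bool,
                            lidar_sees_obstacle: bool,
                            trust_camera: bool) -> bool:
    """Decide whether an obstacle is present given two sensor readings."""
    if camera_sees_obstacle == lidar_sees_obstacle:
        # Easy case: the sensors agree, so fusion adds nothing.
        return camera_sees_obstacle
    # Hard case: disagreement. Some arbitration policy is now required;
    # here we crudely fall back to whichever sensor the designers trust more.
    return camera_sees_obstacle if trust_camera else lidar_sees_obstacle

def camera_only(camera_sees_obstacle: bool) -> bool:
    """With a single sensor, no arbitration step exists at all."""
    return camera_sees_obstacle
```

Note that the single-sensor version simply has no disagreement branch, which is the crux of the argument: the cost is losing Lidar’s redundancy, the benefit is never having to referee a conflict between modalities.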
Characteristically contrarian — Tesla’s Autopilot driver-assistance software and its nascent fully autonomous program are far from ready for primetime, frequently requiring intervention from drivers. Other companies, like Alphabet’s Waymo, are already operating limited self-driving programs on streets today, and they rely on the full package of sensor technology to operate.
Elon Musk has repeatedly touted the capability of Tesla’s self-driving software, but the company’s own engineering executives have privately conceded that the CEO is overstating how far the technology has advanced. In videos posted by early beta testers of Full Self-Driving, the cars frequently get confused and need human intervention.
Better than a human — Since first announcing the effort in 2016, Tesla has repeatedly missed its self-imposed deadlines to achieve fully autonomous driving, demonstrating how tough a challenge the problem really is. Musk has admitted the technology probably needs to be 10 times better than a human driver before the public will accept it. But training a computer to respond to the unpredictable behavior of pedestrians and human drivers is difficult.
Still, if Tesla is right and it can successfully create self-driving technology that relies on affordable cameras and a customer base feeding data back to ever-improving software, the company could end up leaps and bounds ahead of the competition. It already has millions of cars on the road providing it with useful data, and its supercomputers give it the power to train quickly on that data. But self-driving is not a solved problem yet, so it remains to be seen whether Tesla’s solution is the “right” answer.