Over 30,000 people in the U.S. die every year in motor vehicle accidents. There’s a big hope that autonomous vehicles, which have made substantial technological progress over the last few years, could be the key to driving this number down. Unfortunately, driverless cars are on the brink of a massive moral pothole: When faced with a problematic situation, should they act in the best interest of society or of their occupants? And, putting aside the Trolley Problem for a moment, can they reasonably be expected to navigate smartly around floods, trees, people, deer, and every other damn thing? Not if they’re simply programmed to drive.
A human being might not run into any of these situations in their lifetime, but that doesn’t mean they couldn’t intuit a reaction. A driverless car, on the other hand, would likely fail. In the words of Gill Pratt, the former head of the DARPA Robotics Challenge and current Toyota employee: “We are a long way from the finish line for autonomous driving cars.” In other words, the race to bring a driverless car to market is ninety percent over, but the last ten percent is up a stunningly steep incline.
This is why the world’s largest car manufacturer, Toyota, wants to make autonomous cars behave and react more like humans. They want to integrate motor vehicles with A.I. technologies that can learn and adapt to changing conditions on the fly, keeping passengers inside and people outside safe. The idea isn’t to create an automaton, but a sort of teammate for drivers.
At the Consumer Electronics Show on Tuesday, the company announced the launch of the Toyota Research Institute (TRI), a new initiative designed specifically to develop A.I. and robot systems that can be integrated into autonomous cars to make them safer and more dependable, as well as to create hardware that makes disabled, sick, and elderly individuals mobile both inside and outside the home. Pratt, the newly announced CEO of TRI, has brought together a team of roboticists and A.I. researchers from across the country and built partnerships with research teams at MIT and Stanford. The new roster of Toyota talent befits a $1-billion investment: There’s former DARPA program manager Eric Krotkov, former head of Google Robotics James Kuffner, and MIT roboticist Russ Tedrake, just to name a few.
At the announcement, Pratt reminded the audience that “society tolerates a lot of human error.” While we can forgive a driver for making poor decisions, it would be unacceptable to allow a machine to commit the same mistakes. To limit machine error, TRI is essentially splitting its work across two fronts: Stanford will be working to develop systems that can safely react to unanticipated and untested events, and MIT will be working to create A.I. systems that can explain their decision-making in a way that informs programmers how to build a machine that acts based on logic and evidence.
“We have to understand how that system is going to perform,” Tedrake says of these very complicated systems. He believes the more we understand about how these systems operate, the better we can build a car that more closely emulates the ‘teammate’ relationship Toyota is striving for.
These plans for TRI also fall in line with Toyota’s new initiatives to make cloud-based technology more ubiquitous in its cars and keep them connected to cloud-based servers that help collect and analyze data related to safety and emergency response needs.
Cloud-based systems may also do a lot to help the A.I.-fitted cars of the future all learn together from the experiences of just a few. If one car learns how to, say, navigate past a massive spill of Star Wars Legos on the highway, the rest of the cars would be prepared for that less-than-inevitable scenario. The individual car, in essence, becomes less important than the collective intelligence of the fleet, and that shared intelligence becomes our copilot.
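The fleet-learning idea can be sketched in a few lines of code. This is purely illustrative: the class names, the cloud store, and the scenario strings are assumptions for the sake of the example, not anything Toyota has described.

```python
# Hypothetical sketch of fleet learning: one car learns a new maneuver,
# uploads it to a shared cloud store, and every other car syncs it down.
# All names here are illustrative, not Toyota's actual system.

class CloudKnowledgeBase:
    """Shared store of driving scenarios learned by any car in the fleet."""
    def __init__(self):
        self.scenarios = {}  # scenario description -> learned response

    def upload(self, scenario, response):
        self.scenarios[scenario] = response

    def download(self):
        return dict(self.scenarios)


class Car:
    def __init__(self, cloud):
        self.cloud = cloud
        self.known = {}  # this car's local copy of learned responses

    def learn(self, scenario, response):
        # One car encounters a novel situation and works out a response...
        self.known[scenario] = response
        # ...then shares that experience with the whole fleet.
        self.cloud.upload(scenario, response)

    def sync(self):
        # Every other car pulls the pooled experience from the cloud.
        self.known.update(self.cloud.download())


cloud = CloudKnowledgeBase()
fleet = [Car(cloud) for _ in range(3)]

# Car 0 handles the Lego spill; cars 1 and 2 learn it without ever seeing it.
fleet[0].learn("lego spill on highway", "slow down, signal, change lanes")
for car in fleet[1:]:
    car.sync()
```

The point of the design is that experience is pooled centrally rather than locked inside one vehicle, so a scenario need only be encountered once anywhere in the fleet.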