Autonomous car development has a cooperation problem. That’s according to Hava Siegelmann, a program manager at the Defense Advanced Research Projects Agency (DARPA), who said the companies she’s spoken to are lukewarm about working together on making A.I. safer.
“The next program that I’ve been working on is about safety of A.I. […] the companies that I’ve contacted that are working on drones and self-driving cars, said they are actually not interested to work together on safety of A.I.,” Siegelmann said Friday during a panel discussion at the Human-Level Artificial Intelligence conference, organized by GoodAI in Prague, Czech Republic. The comments come as automakers ramp up their autonomy efforts, with Waymo passing eight million autonomous miles this summer, but questions linger about the best way to approach safety. In March, 49-year-old Elaine Herzberg became the first pedestrian killed by an autonomous car. Yet while automakers have shown enthusiasm for on-road communications technologies like 5G, they’ve shown less interest in communicating with each other about development.
Siegelmann is not the first to highlight this issue. Ralf Speth, CEO of Waymo partner Jaguar Land Rover, said at the Beijing Auto Show in April that “we need to make sure that we get the right environment. No car company can do it on its own anymore so government, academia and the industry across sectors must really work together […] if we work together, also with the right cybersecurity system, then autonomous can be the future technology.”
Following the March incident, a Bloomberg op-ed argued that the industry needed to change its attitude toward development. The safety-obsessed aviation industry, writer David Fickling noted, shares information about crashes so that all carriers can learn from them. That collaboration has been a major success, with passengers flying around seven trillion kilometers annually with almost no incidents, but automakers appear reluctant to share data amid the race to bring autonomous cars to market first.
Siegelmann remains largely positive about A.I. as a whole, adding: “I think the benefit would be great, and I think if we educate people about it and everyone knows as much as we do as researchers in the field, that would probably be the best solution.”
Editor’s Note: The Human-Level Artificial Intelligence conference funded Inverse’s travel and accommodation to cover the event, but the organization has no input over Inverse’s editorial coverage.