Ford CEO Mark Fields is cautiously optimistic about the future of the self-driving car industry, but his biggest fear for the future of autonomy might have already happened.
“The one fear I have is if anybody in this industry tries to jump the gun and maybe get a beta-test product out there that — God forbid — has an event, an accident or something, that’s going to cause people to pause,” Fields said at the Ford Mobility Summit, a panel discussion with two New York City transportation officials.
“We’ve started to see some of the consequences of that recently,” moderator Janette Sadik-Khan, a former NYC DOT commissioner, said right after Fields’ statement. The subtext here, even though neither party referred to the event directly, is that semi-autonomous systems have already come under scrutiny after a deadly accident. In May of last year, Tesla owner Joshua Brown died after crashing into a tractor-trailer with his Autopilot system engaged. The National Highway Traffic Safety Administration (NHTSA) opened a large, public investigation (which Tesla CEO Elon Musk thought was unnecessary) and eventually decided that Tesla’s systems were not at fault. The NHTSA’s final report found no safety defects in Tesla’s design and closed the investigation, though it noted that such defects could still exist.
There’s a strong case to be made that rolling out autonomous software progressively actively encourages drivers to test the limits of what their vehicles are capable of, and can mislead drivers into thinking driver-assistance systems are more competent than they are.
Ultimately, in Brown’s tragic case, investigators concluded that human negligence, not a defect in Tesla’s systems, led to the accident. But what Fields is essentially saying is that the gray area between full driver control and full, hands-off autonomy is a minefield that could get people hurt or killed.
Most U.S. automakers other than Tesla are taking a different approach. We’ve known for a while that several major companies have autonomous systems that are as good as, if not better than, Tesla’s Autopilot, but the strategy Fields is advocating here is to keep all that tech under wraps until his company is sure it’s safe. Fields mentioned at the Mobility Summit that Ford is doing little research into level 3 autonomy, AKA “conditional autonomy,” in which autonomous systems can steer and watch the road but must hand control back to a human driver when things get too complicated. That handoff is the most dangerous part of the process, and Ford and others have decided they don’t want to dip their toes into the autonomous waters until they have better systems for managing it.
Currently, Tesla’s most advanced version of Autopilot sits comfortably in level 2 autonomy, where the car can drive itself but needs an attentive human driver at all times to take control if necessary. Tesla says all vehicles now in production have the requisite hardware for full autonomy, but the software (and the regulations around it) still hasn’t quite caught up.
It’s worth noting that Tesla’s business plan isn’t necessarily less safe for consumers overall. As Musk loves to point out, some of the benefits of Tesla’s self-driving technology could have a huge impact on road safety even without full autonomous control of vehicles. The viral videos of Teslas autopiloting their way out of freeway collisions are a dramatic example of this, but behind the scenes, Musk says, progressive Autopilot updates have drastically reduced crash rates: as long as drivers use the features properly, the assisted systems make them much safer.
Musk has said he wants updates to Tesla’s second-generation Autopilot hardware (HW2) to push that crash-rate reduction to 90 percent. It’s an interesting difference in thinking between Tesla and some of the traditional automakers. During the summit, Fields spoke of fully autonomous cars as if they were an almost fail-proof design that would never kill their occupants. But he likely knows that even if Ford rolls out fully autonomous vehicles by 2021, there will still be bugs and defects in every system (human programming is inherently fallible, after all) and there will be accidents. Tesla’s plan is more of an acceptance of risk: at some point, Autopilot will probably be at fault for a driver’s or passenger’s death, but the larger benefits should drastically improve driver safety overall while the company proceeds toward the lofty goal of true, near-flawless autonomy. Still, Fields has a point: we should all hope that incident doesn’t come too soon, and that when it does, the government responds swiftly and judiciously.
“The thing that keeps me up most is that one of our competitors tries to go out there early and there’s a death or an accident,” Fields said. “Because that will just solidify in people’s minds that autonomous vehicles are not ready for prime time.”