The federal government has announced that it will treat the A.I. computer in every Google driverless car as it would a human.
The decision came in a letter from the chief counsel of the National Highway Traffic Safety Administration, which states that “if no human occupant of the vehicle can actually drive the vehicle, it is more reasonable to identify the ‘driver’ as whatever (as opposed to whoever) is doing the driving.”
This is a significant step, since it opens the door for the regulation of completely driverless vehicles, where no manual override is possible. “This letter will one day be in a museum of technology history — or at least legal history,” Bryant Walker Smith, an assistant professor at the University of South Carolina School of Law, told Wired. Still, many complicated legal hurdles remain for Google and its competitors.
How do you regulate a car with no driver?
It’s a more complicated question than you might think. That’s because all of the NHTSA’s federal motor vehicle safety standards assume that a human is operating the vehicle. That language is embedded in the thousands of pages of rules that Google must prove it has complied with before its car can be sold. For example, the rules specify that the brake pedal must be within reach of the driver’s foot, and that the parking brake must be operated by the driver’s hand or foot.
Presumably, Google’s computer will have neither hands nor feet. So how will the company resolve this? The letter suggests a couple of routes. One is to change the rules, so that the language reflects the possibility of a computer-driven car. That, of course, will take time.
In the interim, Google could apply for an exemption at every point where it cannot meet the standards because of the design of its car. The rules allow exceptions where a company can prove its model is just as safe. Google has yet to go this route, but in the absence of regulatory reform it would have to if the vehicles are going to make it onto the road.
How to monitor a computer?
There’s another problem, too. Even if the NHTSA accepts that the computer is the driver of the vehicle, the regulator lacks the testing protocols to determine whether Google has complied with a given provision. Take, for example, the rearview mirror. The rules specify that an image of what’s going on behind the vehicle be displayed to the driver. Given that the computer is the driver, it makes sense that Google’s cars would continuously feed information about what’s happening behind the car to the A.I. system. But how will the regulator test that this information is in fact being sent and received? It’s not as easy as taking a seat behind the wheel and looking at the mirror, and no protocols exist for how such a test might be completed.
Google still has a long way to go to jump through the hoops the feds will require before driverless cars can be sold to the public. The regulatory challenges are probably at least as big as the technological ones. But take comfort in the fact that Google self-driving cars are already on the road in Mountain View, California, and in Austin, Texas. These prototypes aren’t fully driverless (a human operator can take over the controls in a tight spot), but they’ve racked up an impressive 1.42 million fully autonomous miles to date.
The driverless future is coming, and governments would be wise to get their archaic law books in order.