When news broke Monday that one of Google’s self-driving cars hit a bus, the company responded to the Associated Press’s questions about the accident with a statement that included a snippet of the report released today.
The February report expands on the statement of culpability and attempts to emphasize the normality of the accident. Here’s what happened:
- The self-driving car detected an approaching bus, but predicted the bus would yield. The human test-driver figured the same.
- Meanwhile, the driver of the bus thought the car wasn’t going to pull out of its lane. It was a normal case of wrong assumptions — except, not so normal because it involved experimental artificial intelligence.
“This type of misunderstanding happens between human drivers on the road every day,” reads the report. “This is a classic example of the negotiation that’s a normal part of driving — we’re all trying to predict each other’s movements.”
The report also mentions that the self-driving car had originally come to a stop because it had detected sandbags near a storm drain — a detail that, sure, gives more color to why this happened, but mostly seems to be inserted to show off the car’s detection abilities.
The company says it has reviewed the incident and “thousands of variations on it” in its simulator and has made the proper refinements to the software.
“Our cars will more deeply understand that buses and other large vehicles are less likely to yield to us than other types of vehicles, and we hope to handle situations like this more gracefully in the future.”