One of Google’s self-driving cars is partly to blame for a fender bender in California

“Be honest: Which one of you was it?”
Image: AP Photo/Eric Risberg

Google’s self-driving cars have been in a few minor accidents over the last few years, but in every case so far, the accident was the result of something a human was doing. Now it seems a car itself was at least partly to blame for a small accident in Silicon Valley, by making a prediction not unlike one a human driver would make.

Last month, a Google self-driving car—which was not being driven by the human tester inside the car at the time—hit a public bus as the car was trying to make a right turn, the AP reported. No one was injured in the incident. Quartz reached out to Google for details and to confirm that the car had been driving itself at the time of the incident. The company sent back a snippet from a report it plans to release tomorrow on the state of its self-driving program:

On February 14, our vehicle was driving autonomously and had pulled toward the right-hand curb to prepare for a right turn. It then detected sandbags near a storm drain blocking its path, so it needed to come to a stop. After waiting for some other vehicles to pass, our vehicle, still in autonomous mode, began angling back toward the center of the lane at around 2 mph—and made contact with the side of a passing bus traveling at 15 mph. Our car had detected the approaching bus, but predicted that it would yield to us because we were ahead of it. (You can read the details below in the report we submitted to the CA DMV.)

Our test driver, who had been watching the bus in the mirror, also expected the bus to slow or stop. And we can imagine the bus driver assumed we were going to stay put. Unfortunately, all these assumptions led us to the same spot in the lane at the same time. This type of misunderstanding happens between human drivers on the road every day.

This is a classic example of the negotiation that’s a normal part of driving—we’re all trying to predict each other’s movements. In this case, we clearly bear some responsibility, because if our car hadn’t moved there wouldn’t have been a collision. That said, our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.

It does seem that—rather like a human would—the Google car judged it was fine to proceed, since the bus was behind it at first. Google told Quartz that it has reviewed the incident and has tweaked its software to handle this situation. “Our cars will more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles, and we hope to handle situations like this more gracefully in the future,” the company said.

Quartz also reached out to the California Department of Motor Vehicles to ask whether the agency was reviewing the incident, but has not yet received a response.

At the beginning of February, Google announced that its cars had driven over 1.4 million miles on roads around the US since 2009, and that it had simulated millions of additional miles of experience, which are also fed back into the cars. But as this incident—and the opinion of experts in the field—suggests, that is still relatively little driving experience in the grand scheme of things. (For reference, Americans drove some 3.1 trillion miles in 2015.)

The US National Highway Traffic Safety Administration recently said that it now considers self-driving cars like Google’s to be legal drivers, but there was no indication whether this fender bender would affect Google’s no-claims bonus on its insurance.

President Obama’s most recent budget proposal included incentives to spur the development and deployment of self-driving cars in the US. While it might not look great that this rolling computer hit a bus on its own, it’s worth remembering that there were roughly 30,000 fatal car accidents in the US in 2014 alone. Self-driving technology still has a way to go, but the real question may be whether we need self-driving cars with an error rate of zero, or merely an error rate considerably better than ours.