Uber’s self-driving fatality shows how much pressure is put on test drivers

Uber’s car could see but not act.
Image: National Transportation Safety Board via AP

A federal investigation into Uber’s fatal self-driving-car crash in March has found that the car’s self-driving system didn’t stop itself from hitting a pedestrian because Uber had disabled its emergency-braking capability to prevent “erratic vehicle behavior.”

The investigation’s preliminary findings (pdf) highlight the precarious position autonomous safety drivers are in: They’re put in charge of relatively new technology whose safeguards have been stripped away, seemingly without full visibility into what the car sees.

Even though the car detected the pedestrian six seconds before the collision, and determined 1.3 seconds before impact that emergency braking was needed, it never alerted the driver to either fact. In those six seconds, the self-driving system kept changing its mind about what the pedestrian even was, classifying her first as an unknown object, then as another vehicle, then as a bicycle. (The pedestrian was walking her bike across the road.)

According to the report, Uber’s safety driver did try to avoid the collision by turning the steering wheel less than a second before hitting the pedestrian, but didn’t brake until after the impact.

The Volvo model involved in the crash has its own built-in automatic emergency braking, but that feature was also disabled while the computer was driving.