
Uber’s self-driving fatality shows how much pressure is put on test drivers

National Transportation Safety Board via AP
Uber’s car could see but not act.
By Dave Gershgorn

Artificial intelligence reporter


A federal investigation into Uber’s fatal self-driving-car crash in March has found that the car’s self-driving system didn’t stop itself from hitting a pedestrian because its emergency-braking capability was turned off by Uber to prevent “erratic vehicle behavior.”

The investigation’s preliminary findings (pdf) highlight the precarious situation autonomous safety drivers are in: They’re put in charge of relatively new technology with safeguards stripped away, and seemingly without access to complete information on what the car sees.

Even though the car saw the pedestrian six seconds before the collision, and recognized emergency braking was needed 1.3 seconds before, it didn’t alert the driver that the pedestrian was seen or that emergency braking was needed. In those six seconds, the self-driving system was wildly confused about what the pedestrian even was: First, it was an unknown object, then another car, then a bicycle. (The pedestrian was walking her bike across the highway.)

Uber’s safety driver did try to avoid the collision, turning the steering wheel less than a second before hitting the pedestrian, but didn’t brake until after the impact, according to the report.

The Volvo model involved in the incident had its own ability to automatically brake in emergency situations, but that feature was also turned off while the computer was driving.
