There’s an old parable of a person leaving the pub late at night and searching in vain for his keys under a streetlamp. A bystander asks where he might have left them. “I dropped them over there,” he says, pointing into the darkness. “But this is the only place where there’s light.”
That, in a nutshell, has been Silicon Valley’s self-driving car strategy over the past five years. Faced with the task of designing an autonomous vehicle, Waymo, Uber, Tesla, and others decided to go where the light was. That meant pouring on the miles, because miles were what could be tested most easily.
Last year, Silicon Valley companies logged a combined 2 million autonomous miles in California, one of their biggest testbeds. In the end? Miles alone aren’t worth that much. What matters is quality. Because outliers, the low-probability but potentially lethal events, are so rare, encountering enough of them by chance to account for them is virtually impossible. Jack Weast, who oversees automated vehicle standards at Mobileye, cites research suggesting that demonstrating safety statistically would require roughly 30 billion miles of real-world testing (pdf). “With a fleet of 100 cars, that would take you about 1,000 years,” he says. “And you better not update the software. If you do, you have to start over.”
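To see roughly where the 1,000-year figure comes from, here is a back-of-envelope sketch; the round-the-clock average speed is an illustrative assumption, not a number from Weast or the research he cites.

# Rough check of the fleet-years math; the average speed is an assumed figure.
FLEET_SIZE = 100       # cars in the test fleet
TARGET_MILES = 30e9    # real-world miles the cited research calls for
AVG_SPEED_MPH = 35     # assumption: continuous, round-the-clock driving

miles_per_car = TARGET_MILES / FLEET_SIZE
hours_per_car = miles_per_car / AVG_SPEED_MPH
years = hours_per_car / (24 * 365)
print(f"{years:,.0f} years")  # roughly 1,000 years, in line with Weast's estimate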
But there’s a different approach. Rather than flood the streets with sensor-laden cars, major automakers (and Mobileye, acquired by Intel in 2017) are attempting to design safety in from the start. In the “safety case” technique, borrowed from the aviation and nuclear industries, safety is broken down into its component parts, from technical specifications to human psychology. After a “safe” system has been designed, it is validated through a tight loop of software simulation and hardware testing, with real-world driving coming only at the end.
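For a concrete picture of that loop, here is a minimal Python sketch of a staged validation pipeline; the stage names and pass/fail checks are illustrative assumptions, not any company’s actual process.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str
    passes: Callable[[], bool]  # gate that must pass before the next stage runs

def run_safety_case(stages: List[Stage]) -> bool:
    # Each stage is a gate: a failure sends the design back for rework
    # instead of letting it advance toward real-world miles.
    for stage in stages:
        if not stage.passes():
            print(f"stopped at {stage.name}: rework the design before proceeding")
            return False
        print(f"{stage.name}: passed")
    return True

# Illustrative ordering: real-world driving comes only after the earlier gates pass.
pipeline = [
    Stage("formal design verification", lambda: True),
    Stage("software simulation", lambda: True),
    Stage("hardware-in-the-loop validation", lambda: True),
    Stage("limited real-world testing", lambda: True),
]
run_safety_case(pipeline)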
“We believe very strongly in safety by design,” says Weast. “We try to do formal verification on paper with mathematics and logic, like other safe industries do, rather than write a bunch of code, have no design verification, and try to drive as many miles as you can and gather statistical evidence that what you built is safe.”
Now even Uber is adopting the method, announcing its first “safety case” this week at the Automated Vehicles Symposium in Orlando, Florida. It was a tacit acknowledgment that Silicon Valley’s “move fast and break things” approach wasn’t working—for the company or the public. You can see the opposite tack at work in autonomous trucking company Peloton’s slow, steady rollout of a self-driving semi-truck, or in Waymo’s decade-long quest to debut a true robo-taxi.
“There’s been a lack of transparency within the industry about how the tech works and why we think it’s safe,” says Weast. “A lot of the answer has been, ‘It’s proprietary magic, trust me.’”
But rivals are now coming together to agree on common safety principles. This month, 11 companies, including Audi, Baidu, BMW, Daimler, and Intel, published a white paper entitled “Safety First for Automated Driving.” It was an attempt to standardize on safety and compete on everything else. “One bad actor,” says Weast, “could shut down the whole market for us.”
A version of this post was originally published in the weekend edition of the Quartz Daily Brief newsletter.