When experts want to illustrate the potential safety of self-driving cars, they point to the remarkable safety record of modern aircraft, and rightly so. Globally, only one fatal accident occurs for every 3 million large commercial passenger flights (down by a factor of 16 since the 1970s). Last year, US airlines saw their first fatality since 2009.
But the pilots shouldn’t get all the credit. Commercial aircraft are in the hands of a human for only about three to six minutes of each flight—mostly during takeoff and landing. The rest of the time, automated systems are in charge. For at least a decade, technology has existed for commercial aircraft to fly with little to no human assistance. The reason they don’t is more a matter of regulation—and human psychology—than technology itself.
Experts have told Quartz that air travel is seen as the gold standard for autonomous safety. After Boeing, airplanes can no longer serve as quite the same analogy for self-driving cars.
Two crashes of Boeing’s 737 Max 8 aircraft have left 346 people dead in the last six months. It’s not the technology that failed (although flight sensors did appear to malfunction on both flights and the software lacked safeguards against faulty readings). In its quest to update its 1960s-era (paywall) aircraft design to carry more passengers and achieve better fuel efficiency on the cheap, the aerospace company and its regulators oversaw a cascade of failures that compounded errors at every step. The result was two flight crews fighting—and failing—to wrest control of their aircraft from an autonomous steering system that repeatedly forced their aircraft into a dive.
Safety features were sold as expensive extras instead of standard equipment. Pilot retraining was virtually ignored (“an iPad lesson for an hour,” one pilot told Quartz). Despite redesigning the aircraft’s airframe and engines, Boeing portrayed the changes as minimal, even though they altered the aircraft’s handling—a software system to prevent stalling was added for just this reason. No redundant systems prevented the software from forcing a Max 8 into a dive when the anti-stall system was engaged, and confusing cockpit indicators did not show pilots how to override the system in a crisis. During the plane’s development, the US Federal Aviation Administration (FAA) didn’t raise major objections. When problems did begin to emerge, Boeing dismissed critics, delivered fixes slowly, and insisted government officials not ground the fleet before the second accident. Those decisions turned preventable technical mistakes into a systemic failure that killed hundreds.
After Boeing, the question for self-driving cars is no longer just whether we can trust the technology on our roads—it is whether we can trust the companies and regulators developing such technologies at all. Boeing’s cold-blooded calculation when it came to risking other people’s lives hints at the problems facing certification of automated vehicles (AVs). The software for self-driving cars will be far more complicated, and far less tested, than that in airplanes.
Car companies take notice
The 737 Max’s gruesome death toll has already alarmed at least one auto industry executive. Dieter Zetsche, chief executive at Daimler, told a conference in China on March 24 that Boeing’s crisis hints at what will happen if autonomous algorithms go awry. “Even if autonomous cars are ten times safer than those driven by humans, it takes one spectacular incident to make it much harder to win widespread acceptance,” Zetsche said. “If you look at what is happening with Boeing then you can imagine what happens when such a system has an incident.” In fact, just such a spectacular incident was the genesis of the FAA. Two commercial airliners collided in mid-air over the Grand Canyon in 1956—at the time, the deadliest accident in commercial aviation history, with 128 deaths. The tragedy prompted the US to establish the FAA. Even its predecessor, the Civil Aeronautics Authority, was only organized decades after the Wright brothers’ first powered flight at Kitty Hawk.
A skeptical public is now looking for clues to the safety of AVs as car companies (and governments) race to roll them out as fast as they can. So far, it hasn’t been reassured. A 2017 survey by the Pew Research Center found that less than half of Americans actually want to ride in a driverless car—42% cited a “lack of trust” as the reason. Public attitudes have shifted little since Pew first asked the question in 2014.
It’s not that humans are all that safe. Every year, cars kill more than 1 million people worldwide. Human error is behind 94% (pdf) of the 37,000 deaths on US roads each year. But it’s still far from clear whether self-driving cars can really be any safer (despite claims from companies like Tesla).
Statistics on miles traveled by self-driving cars are still not comparable with those for the general population of drivers, as AVs are severely limited in the places and conditions in which they currently operate. Meanwhile, counterexamples have left an indelible impression, such as the video of one of Uber’s autonomous Volvos (with a safety driver behind the wheel) striking and killing a woman at night in Tempe, Arizona, last March. There have also been at least three fatal accidents involving Tesla drivers handing over too much control to their cars’ Autopilot systems.
Bryant Walker Smith, a law professor at the University of South Carolina who studies the implications of autonomous vehicles, wrote that these incidents have occurred “uncomfortably soon” in the history of automated driving. AVs are still nowhere near today’s human safety standard of one fatality for every 100 million vehicle miles traveled in the US.
But we are now entering a world in which computers will make judgments for us in inherently uncertain and potentially deadly situations. They will be wrong more often than we have come to expect of today’s automated systems in factories and airliners. The unpredictable environment of the open road is far more complex than inside a building or in commercial airspace.
That makes safety regulation an existential question for the industry. Steve Shladover, a research engineer at the University of California, Berkeley, says Boeing’s failures suggest a more muscular regulatory body is needed to prevent similar mistakes. The $211-billion company, with decades of experience launching rockets and manufacturing airplanes, somehow overlooked basic design principles of automated systems. Pilots of the downed 737 Max aircraft were given few options to save themselves. Boeing stuck with paper manuals (paywall) rather than update its cockpit display with electronic systems, because doing so would have triggered more pilot training. The Lion Air pilots on the doomed Indonesian flight were flipping through the manual as they fought to regain control of the jet, which crashed into the Java Sea, killing all 189 people aboard.
“I expect that many of the people working on automated driving systems are thinking more carefully about these things in the aftermath of the Boeing experience, hopefully reducing the risk of similar problems on our roads,” Shladover said over email. “We do need meaningful independent regulatory oversight of the developers of automation systems and can’t just depend on them to police themselves.”
It’s unlikely this sort of regulation will come from the US. For years, and especially under the Trump administration, corporations have increasingly been left to oversee themselves. The FAA has steadily handed over aspects of its airworthiness certification process to airplane manufacturers. In the case of the 737 Max flight software, the FAA approved a team of Boeing’s own engineers to “conduct inspections and tests and issue certificates on behalf of the FAA.” The agency later approved Boeing’s findings. The US government is now conducting a criminal investigation and reviewing the FAA’s certification process of the 737 Max.
Oversight outsourcing goes right to the top. In the immediate aftermath of the second 737 crash, the US appeared to bow to pressure from Boeing. CEO Dennis Muilenburg appealed directly to president Trump to keep the 737 Max in the air. Trump announced the US was grounding the 737 Max 8 and 9 only three days after dozens of countries had banned the aircraft. Trump told reporters at the time that “we didn’t have to make this decision today. We could have delayed it. We maybe didn’t have to make it at all. But I felt it was important both psychologically and in a lot of other ways.” Acting FAA administrator Dan Elwell later claimed the decision was “fact-based” and due to “new satellite data available this morning” tracking flight patterns. Trump’s next pick for the head of the FAA, Steve Dickson, a former Delta executive, is unlikely to change this pattern of industry influence over the regulator. (Before that, Trump had considered nominating his personal pilot, John Dunkin, for the job, but the idea drew opposition in Congress.)
Who can you trust?
It’s unclear how much of a lasting impression the 737 Max crashes will leave on the public. Ultimately, Smith says, a technology’s benefits tend to wear down people’s distrust once it arrives. There are already plenty of early adopters ready for autonomous vehicles. Among the 44% of Americans already willing to jump to self-driving cars, avoiding stress, multi-tasking, independence, and a “cool…experience” were the top reasons cited in Pew’s survey. It’s easy to see how people will embrace the technology once it becomes more ubiquitous—even if safety fears are not adequately addressed.
Smith argues that means we need to focus “much less on whether the public will trust automated vehicles and much more on whether the companies developing and deploying these vehicles are worthy of our trust.” Accidents will happen, and trust in the technology will suffer in the aftermath. But a well-designed system of corporate and government accountability should be able to correct for this.
Despite the FAA’s recent lapse, the aviation industry exhaustively studies crashes and proactively forces airlines and manufacturers to improve their safety record by adding features or redesigning aircraft. There’s nothing quite like that in the car industry, writes Steve Blank, a Silicon Valley entrepreneur and adjunct professor at Stanford University. Carmakers and vehicle safety regulators tend to be reactive instead of proactive. No approval process exists for each car model. Manufacturers self-certify their vehicles based on federal safety standards, and then regulators act only after a defect is detected, triggering a recall in serious cases. Yet fleets of self-driving cars have more in common with complex, semi-autonomous aircraft than with an earlier generation’s Oldsmobile. A proactive testing regime would make sense, but no framework exists to do that.
The latest guidance from the US transportation department allows car companies to self-certify their new AV technologies without safety tests. It’s not clear if a 20th-century regulatory framework, designed to encourage innovation and reduce costs for makers of conventional cars, will work when there are millions of manual, semi-automated, and autonomous cars pouring onto our roads.
But when Congress tried to pass automated driving legislation last year, it failed, even though China, South Korea, and others have already rolled out regulations for the technology.
“Some senators distrusted the technologies and the developers of those technologies and the regulators of those technologies,” writes Smith. “Given this lack of trust, much of it deserved, it’s hard to see how Congress would be comfortable giving the administration the authority and flexibility to regulate in this area.”