The guide psychologists gave carmakers to convince us it’s safe to buy self-driving cars

The new normal? (Image: AP Photo/Jared Wickerham)

Driverless cars sound great in theory. They have the potential to save lives, because humans are erratic, distracted, and often bad drivers. Once the technology is perfected, machines will be far better at driving safely.

But in practice, the notion of putting your life into the hands of an autonomous machine—let alone facing one as a pedestrian—is highly unnerving. Three out of four Americans are afraid to get into a self-driving car, an American Automobile Association survey found earlier this year.

Carmakers working to counter those fears and get driverless cars on the road have found an ally in psychologists. In a paper published this week (pdf) in Nature Human Behaviour, three professors from the MIT Media Lab, the Toulouse School of Economics, and the University of California, Irvine discuss widespread concerns and suggest psychological techniques to help allay them:

Who wants to ride in a car that would kill them to save pedestrians?

First, they address the knotty problem of how self-driving cars should be programmed to respond when they must put either their own passenger or a pedestrian at risk. This is a real-world version of an ethical dilemma called “The Trolley Problem.”

In the original philosophical scenario, a trolley is on a track heading towards five people. You can pull a lever to switch the trolley onto a different track, where just one person is on the path and will be killed. Should you do so?

It’s a dilemma that explores the ethics of actively killing one person versus allowing several people to die. When it comes to self-driving cars, as the professors write, “the decision could involve an autonomous vehicle determining whether to harm its passenger to spare the lives of two or more pedestrians, or vice versa.”

There’s little word from carmakers yet on how they plan to resolve the problem, though the German government recently determined that self-driving cars may not be programmed to value one form of human life over another. In other words, a car cannot be programmed to recognize age and always opt to save a toddler over an elderly man.

The psychologists don’t touch on what, in their view, is the correct ethical response to this problem. Instead, they consider how to make the public feel better about this decision. Earlier psychological research has found that, though people approve of a utilitarian solution—namely a self-driving car that would sacrifice its own passenger in order to save two or more pedestrians—they would prefer to buy and ride in a car that protects its passengers at all costs.

The researchers view this as the truly sticky problem, writing: “As a result, adopting either strategy brings its own risks for manufacturers—a self-protective strategy risks public outrage, whereas a utilitarian strategy may scare consumers away.”

To overcome public reluctance to buy a self-driving car that could one day harm its own passengers, the researchers identify a need to “make people feel both safe and virtuous.” It’s important to educate the public about the overall safety of the cars, so people know that any risk is a relatively small one.

Then the authors suggest appealing to customers’ desire to appear good: “Virtue signaling is a powerful motivation,” they write—but one that works “only when the ethicality is conspicuous.” For example, the distinctive shape of the Toyota Prius means everyone knows its driver is environmentally friendly. A self-driving car could find a similar way of displaying its passengers’ morality to everyone around it.

One autonomous car crash is all it takes to scare everyone

Any accident involving a self-driving car receives disproportionate attention, the authors note. A 2016 crash involving Tesla’s Autopilot, in which the driver was killed, received far more media coverage than the 40,200 other fatal US traffic accidents last year. Focusing on automated cars’ mistakes will inevitably increase fear.

The professors write: “These reactions could derail the adoption of autonomous vehicles through numerous paths; they could directly deter consumers, provoke politicians to enact suffocating restrictions, or create outsized liability issues—fueled by court and jury overreactions—that compromise the financial feasibility of autonomous vehicles.”

In response, they suggest preparing the public to expect the occasional accident, talking openly about improvements in the cars’ algorithms, and, once again, educating the public about the actual risks.

The technology is opaque and hard to understand

Finally, they write, a lack of transparency about how self-driving cars work will create mistrust of the machines. That said, too much information could “overwhelm the passenger” and backfire. So it’s important to do research to figure out just how much information helps passengers feel safe. Similar research on what information makes the public more accepting has already been done for artificial intelligence in industrial and residential settings; comparable studies of automobiles should promote acceptance of self-driving cars.

Psychological assistance or manipulation?

The professors clearly have noble intentions in trying to get safer vehicles onto the road: “We believe it is morally imperative for behavioral scientists of all disciplines to weigh in on this contract,” they write.

From another perspective, though, these professors have come up with a way to psychologically manipulate us into acceptance. If self-driving cars really are safer than human-driven ones, there’s no obvious problem with this plan in the short term. But it’s impossible to know how self-driving cars will transform the world, and whether we want to live in a society where everyone is transported everywhere in autonomous machines. It’s disquieting to think that a select few are actively shaping a public decision that will fundamentally change how we live.

This often happens when new technologies are introduced, but it’s rare to see it discussed so explicitly. There was a similar effort to change social norms almost a century ago, when cars became increasingly common and began to dominate public space. Once, children and adults were free to roam the streets, and when cars were first introduced, it was drivers’ responsibility to avoid people. But in the 1920s, US automakers campaigned to restrict pedestrians instead. Lo, “jaywalking” became both a term and a crime, and streets were built to serve cars, not people.

One could have made a similar argument for the widespread adoption of automobiles back then: Cars have certainly advanced society, allowed us to travel more easily, and opened up a world of opportunities. But it’s far from clear that the widespread use of cars is a good thing. And the consequences of car saturation were not all predicted 100 years ago.

In their paper, the professors note that “a system of laws regulating the behavior of drivers and pedestrians” has been “continuously refined” since cars became a ubiquitous feature of the urban landscape.

Now, that system must adapt again. “We will need a new social contract,” they write. “This social contract will be bound as much by psychological realities as by technological and legal ones.”