
Are self-driving cars safe? The tipping point rests on a psychological illusion

Opinion: Adopting autonomous vehicles is a question of psychology as much as of technology.

by Jean-François Bonnefon

According to A Prairie Home Companion, Lake Wobegon is a small (fictional) town in Minnesota where “all the women are strong, all the men are good-looking, and all the children are above average.”

It’s a sentiment everyone is familiar with: In the eyes of parents, their child is always cuter, funnier, and brighter than average.

But this can’t be true of all children, can it? In order for one child to be cuter than average, there would have to be others who aren’t. People sometimes speak of the “Lake Wobegon effect” (also known as “illusory superiority”), according to which a large majority of the population judges itself to be above average for a given quality. This effect can reach nearly comical heights. In 1975, a questionnaire was distributed to 600 professors at the University of Nebraska, asking them, among other things, to evaluate their pedagogical abilities; 94 percent said that they were above average. And students weren’t to be outdone. Around the same time, a massive survey asked a million American high school students to evaluate their leadership qualities: Only 2 percent judged themselves to be below average.

This article is adapted from Jean-François Bonnefon’s book The Car That Knew Too Much

Drivers also seem to fall victim to the Lake Wobegon effect. Numerous investigations have suggested both that the vast majority of them think they drive better than average and that they largely overestimate themselves. These investigations are generally based on small samples of fewer than 200 people, but their accumulated data suggest that the phenomenon is real.

The problem with self-driving cars

What does this have to do with self-driving cars? Calculations have shown that to save the greatest number of lives in the long term, self-driving cars need to be allowed on the road as soon as they are safer than the average driver.

That presents an immediate problem: These autonomous vehicles would still have many accidents, and media coverage of those accidents could lead the public to doubt their safety. And that raises another question: Who would want to buy one of these cars?

Put yourself in the place of a rational buyer. You’re told that a self-driving car is 10 percent safer than the average driver. That is, these vehicles have nine accidents for every ten caused by average drivers. Is this car for you? Would you be safer buying it? Yes, if you are an average driver or a below-average driver. But if you drive a lot better than average, statistically, you would be less safe by letting this car take the wheel for you. You will probably decide that the car isn’t for you.
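To make the buyer’s arithmetic concrete, here is a minimal sketch in Python. The specific numbers (a car 10 percent safer than average, a driver who believes they are 40 percent safer) are illustrative assumptions, not figures from the study:

```python
# Illustrative sketch of the rational buyer's calculation.
# All figures are invented for illustration; none come from the study's data.

AVERAGE_ACCIDENT_RATE = 1.0  # accidents per unit of driving, normalized to 1

def accident_rate(percent_safer_than_average: float) -> float:
    """Accident rate for a driver (or car) said to be X percent safer than average."""
    return AVERAGE_ACCIDENT_RATE * (1 - percent_safer_than_average / 100)

car_rate = accident_rate(10)           # a car 10 percent safer than average -> 0.9
my_believed_rate = accident_rate(40)   # a driver who *believes* they are 40 percent safer -> 0.6

# The rational buyer compares the two rates: the car only lowers your risk
# if its rate is below the rate you attribute to yourself.
if car_rate < my_believed_rate:
    print("Buying: the car looks safer than I think I am.")
else:
    print("Not buying: I think I'm safer than the car.")
# With these numbers: 0.9 vs. 0.6 -> "Not buying," even though a
# self-assessment like the 40 percent figure is very likely inflated.
```

The decision rule itself is trivial; the trouble is that the input, your believed accident rate, is systematically too optimistic.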

The problem, of course, is that you probably overestimate your driving abilities. You are a victim of the Lake Wobegon effect. Remember the 2 percent of high school students who thought their leadership abilities were below average. If only 2 percent of drivers think they are below average, then only 2 percent of them will be interested in a self-driving car that is only a little better than the average driver.

This simple idea is where my collaborators, Iyad Rahwan and Azim Shariff, and I began. First, we had to ask a sample of consumers to evaluate their driving abilities in order to verify the existence of a Lake Wobegon effect.

While we were at it, we decided to conduct our study with a representative sample of the American population so that we could compare the degree of this effect on men, women, the young and less young, more and less educated, and so on. In total, we questioned 3,000 Americans about their driving skills.

Most people think they are a better driver than everyone else.

We asked about 1,000 people how many accidents would be avoided if everyone drove as they did. A person who answers “10 percent” to this question considers themselves to be 10 percent safer than the average driver; a person who answers “20 percent” considers themselves to be 20 percent safer, and so on.

The results were spectacular: 93 percent of people questioned thought they were at least 10 percent safer than the average driver. In fact, most of those surveyed thought that if everyone drove as they did, the number of accidents would be reduced by two-thirds or even three-quarters.

To diversify our measurements, we asked about 1,000 other individuals in the study to rate themselves on a scale of 0 to 100, where 0 means “I am the worst driver in the United States,” and 100 means “I am the best driver in the United States,” and 60, for example, would mean “I am a better driver than 60 percent of the drivers in the United States.”

In this case, again, over 80 percent of those surveyed thought they were better than average. In fact, the majority of people thought they were better than 75 percent of drivers. We even observed that 5 percent of those questioned gave themselves a score of 100/100, the score reserved for the single best driver in the United States.

Thus, it’s clear that the people we surveyed overestimate their driving abilities. Remarkably, this overestimation is consistent across social groups. The numbers are identical for men and women, young and old. Level of education doesn’t matter, nor does income, political opinions, religious beliefs, or ethnic origin. The Lake Wobegon effect knows no barriers in gender, age, or social class.

Autonomous vehicles and human psychology

At this point, we could test the second part of our idea. We expected that people would want self-driving cars to have an even higher level of safety than they perceived themselves to have as good drivers. And that is exactly what we observed: The handful of those who thought that they were only 10 percent better than average would be satisfied with self-driving cars that were 10 percent safer than the average driver. Those who thought they were 10 to 50 percent better than average — and they were somewhat more numerous — wanted cars that were 50 percent better than the average driver. And those who thought they were 70 to 95 percent better than the average driver wanted self-driving cars to eliminate around 90 percent of accidents.

These results are worrying because they cast doubt on the optimistic scenario, according to which autonomous vehicles could gain the confidence of consumers once they are safer than the average driver. A self-driving car that eliminated 30 percent of accidents would already be a technical feat that would save many lives — but to do so, it would have to be present on the roads, and our results indicate that it wouldn’t interest the broader public, because the very large majority of consumers (wrongly) think they drive better than it does.

This means that adopting autonomous vehicles is a question of psychology as much as of technology. Continued technological research is needed to make self-driving cars as safe as possible, but psychological research is also needed to help people better compare their own driving abilities to those of a self-driving car.

That self-driving cars will have accidents is a certainty. Will they have fewer than human drivers? Yes, because it won’t be acceptable to put them on the market if they are more dangerous than humans. But since they will never eliminate all accidents, the first question to ask ourselves is what level of safety must be attained before allowing them on the road in large numbers.

Can self-driving cars eliminate accidents? Probably not.

This question is complex because it has a moral dimension, a methodological dimension, and a psychological dimension. From a moral point of view, is it acceptable for self-driving cars to claim victims, and if so, how many? It would be tempting to adopt a purely “consequentialist” approach to this question; put plainly, from the moment that autonomous vehicles kill fewer people than human drivers do, it no longer matters that they kill, since the net consequences are positive. Still, this calculation isn’t as simple as it seems.

Imagine, for example, that self-driving cars are 30 percent safer than human drivers. And to simplify, imagine that human drivers cover one billion miles each year and kill seven people total. Over the same distance, self-driving cars kill only five. Now imagine that everyone switches to self-driving cars and that, discovering how nice it is to let the car drive all by itself, they start driving twice as much. In one year, we would go from seven road deaths (in one billion miles driven by humans) to 10 (in two billion miles driven by self-driving cars). Is that preferable?
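The arithmetic of this thought experiment is easy to check. Here is a minimal sketch using only the illustrative figures from the paragraph above (seven deaths per billion human-driven miles, five per billion autonomously driven miles):

```python
# Reproducing the thought experiment's arithmetic (figures from the text).

human_deaths_per_billion = 7   # deaths per billion miles driven by humans
av_deaths_per_billion = 5      # ~30 percent safer: 7 * 0.7 = 4.9, rounded to 5

miles_now = 1.0      # billion miles driven per year today
miles_after = 2.0    # billion miles if everyone drives twice as much

total_now = human_deaths_per_billion * miles_now        # 7 deaths
total_after = av_deaths_per_billion * miles_after       # 10 deaths

print(total_now, total_after)  # 7.0 10.0: safer per mile, yet more deaths overall
```

The per-mile risk drops, but total exposure doubles, so the yearly death toll rises. Which of the two numbers is the morally relevant one is exactly the question at issue.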

Maybe yes. But this statistical calculation doesn’t exhaust all of the ethical considerations: If self-driving cars kill only five people per year where humans would have killed seven, that is two fewer victims overall. Yet some of those five might have survived had they been driving themselves. Is it morally acceptable that these people are sacrificed for the sake of the greater number?

Once again, maybe yes. It seems to me that a consensus is forming around the statistical argument: Once self-driving cars statistically diminish the number of accidents per mile driven, it is morally acceptable to permit them on the road. But even if we accept this argument, it won’t be easy to apply. Accidents involving autonomous cars are rare, but so are the cars themselves, so the data are scarce. To show that they are statistically less accident-prone than humans, we would have to increase their numbers to get more data or devise new methods of estimating their probability of having an accident.

And even if we could show that autonomous vehicles have (at least somewhat) fewer accidents than humans, and we allowed them onto the road and the market, there would still be psychological barriers to their adoption. We might be able to show that these cars have fewer accidents than the average driver, but human drivers largely overestimate the safety of their own driving. Who would buy a car that is 20 percent safer than the average driver when most drivers think that they’re 80 percent safer than average?

Consequently, if we think (and this is my opinion) that it is morally acceptable and even imperative to use autonomous driving to save lives, we must launch an enormous ethical, technical, and psychological campaign that allows us to set safety goals for the industry, give regulatory agencies the necessary tools to evaluate them, and help citizens understand these goals so that they can make an educated choice when the moment comes to adopt autonomous driving or not.

Self-driving cars and accidents

But all of this is only in response to the first question posed by autonomous driving: How many accidents will self-driving cars be authorized to have? The second question is even more difficult: Which accidents should we prioritize eliminating? In other words, which lives do we want to protect first? Those of passengers, pedestrians, cyclists, children? This question is at the heart of the Moral Machine project, which I launched together with Azim and Iyad several years ago.

Everyone, myself included, agrees that the scenarios in Moral Machine are extremely improbable. The car has to choose between two victims of an accident that is absolutely inevitable. That’s not how things happen in real life. Under normal driving conditions, self-driving cars won’t choose whom they should run over. During their moment-to-moment operation, they will just slightly modify the risk incurred by different parties on the road.

If you, as a driver, want to pass a cyclist while another car is preparing to pass you, the lateral position you assume determines the risk incurred by the person on the bicycle, by the other car, and by you. For example, the less space you leave for the cyclist while passing them, the more you increase their risk while diminishing your own and that of the other car. But the risk remains low: Even if you leave only a very small amount of space for cyclists each time you pass them, you could drive your entire life without injuring one of them.

But now, consider the problem from the perspective of self-driving cars. If they systematically leave little space for cyclists, they will statistically have a (slightly) greater chance of injuring them. The cumulative effect of these decisions, made by tens of thousands of cars driving tens of thousands of miles, will be felt in the annual road accident statistics: a few fewer victims among passengers, a few more among cyclists. This is what we (Azim, Iyad, and I) call the statistical trolley problem: It’s not about the car deciding whom to endanger when an accident is already inevitable, but rather deciding who will statistically have a greater chance of being the victim of an accident one day.
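One way to see how tiny per-maneuver risks accumulate across a fleet is a simple expected-value calculation. The sketch below uses entirely hypothetical probabilities and fleet sizes (none appear in the article) to illustrate the statistical trolley problem:

```python
# Hypothetical expected-value sketch of the statistical trolley problem.
# The per-pass injury probabilities and fleet figures are invented for illustration.

passes_per_year = 50_000_000   # cyclist overtakes performed by the whole fleet in a year

# Per-pass probability that the cyclist is injured, depending on fleet policy:
p_injury_wide_pass = 1e-8      # cars systematically leave generous space
p_injury_tight_pass = 3e-8     # cars systematically leave little space

expected_wide = passes_per_year * p_injury_wide_pass    # 0.5 expected injuries/year
expected_tight = passes_per_year * p_injury_tight_pass  # 1.5 expected injuries/year

# Each individual pass is almost perfectly safe under either policy, yet the
# programming choice shifts a full expected injury per year onto cyclists,
# visible only in the aggregate statistics.
print(expected_wide, expected_tight)
```

No single car ever "decides" to hurt a cyclist; the harm appears only when the fleet-wide policy is multiplied by millions of maneuvers.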

Unfortunately, traffic is still traffic, regardless of whether you are in an autonomous vehicle or not.

This calculation doesn’t just affect cyclists but also drivers and pedestrians. It also applies to children. In their book on the pioneers of autonomous driving, Lawrence D. Burns and Christopher Shulgan reported that the software of the “Chauffeur” project (the first prototype of the self-driving car developed by Google) had been trained to detect children and expect them to behave more impulsively.

Thus, the software was designed to mistrust a child approaching the road because they might decide to run across it, whereas an adult would wait on the sidewalk. Of course, this programming could have only one goal: giving children more leeway in order to reduce their risk of an accident. But as we’ve seen in the example with the cyclist, reducing the risk for one person on the road often means increasing it (even if only slightly) for someone else. The statistical trolley problem consists in deciding whether such a transfer of risk is acceptable.

Is it possible to pass legislation on this problem? There is a precedent in the history of the automobile: the ban on bull bars in the European Union. These guards at the front of a car are made of several large metal tubes. As their name indicates, they’re designed to protect the car’s frame during accidents involving large animals. They are therefore useful in very specific regions of Australia and Africa. In an urban area, their usefulness is less clear.

Of course, they offer slight protection to passengers in the car, but they also increase the risk of injury to pedestrians and cyclists. In 1996, a British study attempted to estimate this risk. The calculation was difficult, but the experts concluded that bull bars were the cause of two or three additional deaths per year among pedestrians in the United Kingdom.

One can thus conclude that bull bars slightly increase the risk incurred by pedestrians. The transfer of risk is small, but the report triggered a long process of testing and legislation that ended with a ban on bull bars throughout the European Union.

Passengers to pedestrians

What should we take away from this story? First, a mechanical characteristic of a car can cause risk to be transferred from one category of users to another — in this case, from passengers to pedestrians. Second, this could be considered an ethical problem: Is it morally acceptable to increase the risk for pedestrians in order to protect passengers, and at what point does this transfer of risk become unacceptable? Finally, it is possible to ban a certain mechanical characteristic of cars because it entails a transfer of risk that has been deemed unacceptable.

In principle, nothing prevents us from applying the same strategy to the digital characteristics of self-driving cars. The programming of the cars could cause risk to be transferred. We must decide when this is acceptable or unacceptable and legislate to prohibit transfers of risk that seem unacceptable to us. In practice, however, this strategy runs into several problems.

Self-driving cars may be a net good, even if they have flaws.

First, the programming of self-driving cars is far more complex than a simple metal bar attached to the front of a car. Any transfer of risk generated by the programming will be the result of myriad small decisions and interactions with the environment, and it will be difficult to predict. This will make the work of manufacturers all the more difficult if they have to satisfy very precise constraints.

Second, we have no concept of what a just distribution of accidents would be. All we have are current statistics on the victims of road accidents, categorized by their role. Of course, we could ask manufacturers not to deviate too much from them and, therefore, minimize the transfer of risk, but what would the ethical foundations of this decision be? The current statistics for accidents do not reflect moral considerations; they are simply the product of drivers’ reflexes and the environment in which they drive. Why should they be given moral legitimacy by demanding that driverless cars, while having fewer accidents, have the same kind of accidents as humans?

Here we touch on the heart of the moral revolution that self-driving cars confront us with. Until now, we have had little reason to wonder if the distribution of accidents is just or unjust because we couldn’t change it much. It doesn’t do any good to ask human drivers to adjust their driving in order to change the statistics. Things are different with self-driving cars, whose programming could be adjusted in such a way that the statistics change. This new power gives us new responsibilities. Of course, we could react in fear, regretting the creation of these cars that know too much and pose questions we would rather not answer. But we must not forget the fundamental lifesaving promise of this technology. It is now up to us to show our courage and decide together which lives we want to save.

This article was originally published on The MIT Press Reader by Jean-François Bonnefon.
