Will Driverless Cars Be Ethical Killers?

When utilitarianism crashes into automation, decisions cost lives.

Will driverless cars need to kill people? It sounds preposterous until you think about it for half a second and conclude, “yes,” then think about it for another half a second and conclude, “no.” The moral conundrum at play is old enough to have a name, “The Trolley Problem.” In its classic formulation, it goes like this: A trolley is headed down a track with five people tied to it, but you can throw a switch to divert it to another track, which has only one person on it — what do you do? Don’t feel obliged to answer the question; it’s a thought exercise. Also, the people making self-driving cars are going to have to answer it for all of us.

An analysis from a trio of computer scientists and psychologists, published on arXiv in October, explores the psychology of people in driverless cars. The researchers had 400 subjects read about a driverless car that would harm its human passenger to save a crowd. The conclusion was telling: people liked the idea, but they didn’t want the car. People like thought exercises a lot more when they’re not real and their lives aren’t on the line. The question posed by this scenario is, from a real-world standpoint, about programming. Should a driverless car prioritize the safety of everyone, of its passenger, or of no one?

We put the question to our science writers. Things got weird.

Ben Guarino: I’m not convinced that driverless cars will need to assess if they ought to kill people ethically (or, you know, at all). But what do you guys think?

Neel Patel: I think car manufacturers will have to cross this bridge and modify the technology in some way so that it makes these judgments or evaluations. This isn’t so much a problem if every car on the road is automated, but there’s going to be a transition period where some cars are driverless and some aren’t. And during that period, cars will need to make some kind of judgment call.

Right now, when an accident is about to occur, every driver of any car pretty much works towards self-preservation. Even if a person’s actions actually cause more people to be injured or killed in adjacent vehicles, we can’t really blame the driver for making this move — it’s a survival instinct built into pretty much everyone who’s ever lived. We can understand it and identify with it.

But autonomous vehicles won’t have that moral luxury. Given how many accidents occur every year in the U.S. (there were over 30,000 road deaths in 2012), there will be situations where a car’s actions result in more deaths than an alternative would have. That won’t be okay if we’re talking about four size-small body bags versus one grandfather.

And there will still be problems because no one wants to get in a car programmed to sacrifice passengers. Americans have never embraced utilitarianism very well.

BG: I get that autonomous vehicles won’t be programmed with “instinct” — but they will have better awareness and reflexes, which I would argue gives them an advantage over human reactions and eliminates the need for ethical decisions in a lot of these philosophical corner cases. In the scenario where a kid darts out into the road in one lane and there’s a van incoming in the other, a human has to process a lot of information and make a snap judgment very quickly. But an autonomous or technologically enhanced car has more than the two forward-facing eyes it needs to keep on the road, right? Instead of responding only once there’s a kid in the path of your tires and the car has to choose between a dead kid and a van crash, it could slow as soon as something moves off the curb and into the street, before a human would be able to react.

These above-human responses are why a lot of the accidents involving driverless cars are rear-end collisions: human drivers aren’t used to cars that brake so reflexively.
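To make Ben’s point concrete, here’s a toy sketch of the reflex he’s describing: brake the instant anything enters the roadway, rather than weighing who or what it is. Every name and number here — the sensor latency, the braking rate, the `control_step` function — is hypothetical, not any manufacturer’s actual control code.

```python
# Toy sketch of a purely reactive collision-avoidance loop. The car begins
# slowing the moment something leaves the curb, long before a human could
# react. All values below are illustrative assumptions.

HUMAN_REACTION_TIME = 1.5   # seconds; a common rule-of-thumb estimate
SENSOR_LATENCY = 0.05       # seconds; hypothetical lidar/camera cycle time

def control_step(obstacles, speed):
    """Return a new target speed (m/s) given objects detected near the road.

    `obstacles` is a list of (distance_m, lateral_offset_m) tuples; anything
    within 2 m of the travel lane triggers braking.
    """
    for distance, offset in obstacles:
        if abs(offset) < 2.0:                           # something is off the curb
            stopping_distance = speed ** 2 / (2 * 7.0)  # ~7 m/s^2 max braking
            if distance < stopping_distance * 1.5:      # safety margin
                return 0.0                              # full brake: no ethics, just physics
            return speed * 0.5                          # otherwise, shed speed early
    return speed                                        # road clear: hold speed

# At ~14 m/s with a pedestrian 12 m ahead, the car commits to a full stop:
print(control_step([(12.0, 0.5)], speed=14.0))  # -> 0.0
```

The point of the sketch: the car never weighs “kid versus van.” By reacting within a sensor cycle instead of a human reaction time, it tries to make the trolley-problem moment never arrive.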

NP: Definitely agree with your points, but I’m still concerned about the transition period, when we roll over from human drivers to robotic ones. Wouldn’t humans still be the wrench that fucks up the whole machine? Or do you think the technology could account well enough for human error on the part of other cars? Thoughts?

Yasmin Tayag: The advantage autonomous cars have over humans is that they aren’t prone to emotion, which, in the case of most car crashes, manifests as panic. Where a freaked-out human might swerve maniacally to avoid the kid or the van, the self-driving car will, to Ben’s point, simply slow down. Doesn’t it make sense to preserve objectivity — the one thing that will make autonomous cars superior to human drivers? On the road, the only thing that should matter to the driverless car is the data on hand.

I do think that human drivers will always demand the option to override the car’s decision — even when we’ve fully made the switch to robotic cars — and that, I think, is the real cause for concern. I’m not sure the transition period will ever really end.

BG: Humans are bad wrenches and worse utilitarians. My issue with the claim that we should want our robots to somehow be ethical killers is that it assumes that humans know when to be ethical killers. If I’m driving the speed limit and something jumps in front of me, nobody asks me about the greater good in that situation. You either hit the kid or the van and it sucks for everyone involved. Why should we expect driverless cars to make a superior moral call in these scenarios when nobody truly expects humans to do so? We’re the ones who gave the universe Kant and we’re still not getting it done. Driverless cars should simply focus on being the best collision avoiders they can be.

NP: But I think we actually can expect driverless cars to make a superior moral call. This depends, of course, on agreed-upon ethics, but let’s look at one hypothetical.

Generally speaking, we’re going to prioritize the safety of children over adults. Let’s say that in the future, all cars are fitted with technology that knows the general age of all passengers in each car on the road — and this data is accessible and shared with all cars on a given road. If an accident is about to occur because of unforeseen circumstances, those cars can communicate with one another to ensure the safety of the children over the other, older passengers on the road. I don’t really imagine a lot of people putting forth a successful counter-argument to that kind of arrangement.

And the cars are able to do this better than humans because they have access to more information than people do, and can make these evaluations faster — without being affected by the physiological responses that cloud judgment, like emotions or the hormonal surges of high-stress situations. In this instance, they actually can make morally superior choices.

How well you can program something like that, however, remains to be seen…
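As a thought experiment only, Neel’s arrangement might look something like the sketch below. The vehicle-to-vehicle message format, the shared age data, and the priority rule are all inventions for illustration; no such protocol or API exists.

```python
# Hypothetical sketch of Neel's scenario: every car broadcasts its passengers'
# rough ages over an (imaginary) V2V link, and in an unavoidable collision all
# cars apply the same deterministic rule to shield the vehicle carrying
# children. Entirely invented for illustration.

from dataclasses import dataclass

@dataclass
class VehicleState:
    vehicle_id: str
    passenger_ages: list  # rough ages shared over the hypothetical V2V link

def protection_priority(state: VehicleState) -> int:
    """Lower number = protect first. Children outrank everyone, per Neel's rule."""
    return 0 if any(age < 18 for age in state.passenger_ages) else 1

def choose_vehicle_to_shield(nearby: list) -> VehicleState:
    """Every car runs the same rule on the same data, so they agree without negotiating."""
    return min(nearby, key=lambda s: (protection_priority(s), s.vehicle_id))

cars = [
    VehicleState("sedan-42", passenger_ages=[34, 36, 7]),
    VehicleState("coupe-17", passenger_ages=[71]),
]
print(choose_vehicle_to_shield(cars).vehicle_id)  # -> "sedan-42"
```

Note that the hard part isn’t the rule itself — it’s whether anyone agrees on it, which is exactly what the exchange below turns on.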

YT: Yo Neel, who says we’re going to prioritize the safety of children over adults? Isn’t there something deeply wrong with assigning a different value to the life of a toddler than to the life of a senior citizen? If we’re going to go the moral-car route, I would think that making an equal effort to save all lives is key. How are people going to react when they find out that car companies have made value judgments on their lives? As I said before, we need to keep human subjectivity out of programming as much as possible. That’s what always fucks things up.

BG: I’m OK with the men on the Titanic giving up their places to women and children on lifeboats because it was self-sacrifice. If there were a robot separating the men to drown from the women and children, that robot would be smashed with Mjolnir because it is the villain in an Avengers movie.

Plus, no insurer or legislator would ever be cool with a vehicle that could make some sort of lethal calculation without human control.

NP: You’re never going to completely eradicate subjectivity from automated programming. Bias is inherent in automated systems designed to learn from experience. Instead of pretending we can eliminate the problem, we should tackle it head-on and shape that bias in ways that serve people better. This is the future whether we like it or not.

I’m voting Ultron in 2016.