The Case for the Steering Wheel (and Against Truly Driverless Cars)

MIT's David Mindell on the "human-rich, trusted, transparent" future of commuting.

The appeals of the driverless car are multiple, but the argument gaining traction in favor of letting Robo-Jesus take the wheel is the simplest of the bunch: Human drivers are bad. Replacing the tailgating, Tindering hordes with sensors makes easy sense if you believe that individual and mass optimizations can be achieved at once. Massachusetts Institute of Technology aeronautics professor David Mindell doesn’t. Mindell says that the idea of going from automatic to automated is a sci-fi dream that ignores 50 years of history. A veteran of autonomous vehicle design — with a focus on the underwater and the aerial — Mindell believes that robots shouldn’t be supplanting humans or even calling shotgun. His notion, outlined in his new book, Our Robots, Ourselves, is that we need AI and humans to work in harmony.

Mindell chatted with Inverse about how that paints an unlikely future for fully autonomous cars.

How did your years of experience with deep ocean research influence your opinion on automated cars?

I spent 25 years in the undersea realm, engineering remote and autonomous vehicles, and then I also spent more than a decade in the aerospace world. Starting in the undersea realm, I noticed that the remote vehicles weren’t quite working out as we expected, in that they weren’t cheaper and they weren’t safer. What they did do was all kinds of interesting things that were really kind of mind-blowing: reconfigure the social dimensions of the work in very important ways, change what it means to be an explorer, change how you do science in the deep ocean.

If you look at the Predator [drone], you see a vehicle that was originally imagined to be totally unmanned, that wouldn’t even need any kind of ground station. Because why would you need a ground station if you’re totally unmanned? What they ended up with was a system that requires more than 150 people to operate it, with very, very complicated and rich human network interactions over several continents. The Predator did not work out the way people had planned, but it did do something different and new that nobody could do before. In many of these examples you see what people start out imagining as full autonomy ends up having human interventions added at various stages, as it moves from the abstract laboratory experiment into actual fielded operations. In the case of Predator, that was a very expensive and painful evolution. It cost a lot of our taxpayer dollars and we ended up with something that’s rather imperfect. Engineers have a word for it that I can’t use in the interview.

Do you see Google headed toward full autonomy? Is it a parallel with the Predator drone?

I think there’s a parallel there. I actually don’t think they’re going to get there. All the same things that pushed those systems away from full autonomy are going to push them the same way, as they move into the field.

I think other automakers are taking a different approach, and it’s going to be a competitive field. We’ll see how it plays out.

And what are these factors pushing systems away from full autonomy?

Some of these are regulatory approvals, right? There’s no way it’s going to get approved without a big red stop button. That’s a human intervention. If it doesn’t work 100 percent perfectly all the time, if it doesn’t work all over every single geography and weather condition, there has to be some kind of human control.

They’re going down a road they believe very strongly in. And I’m just saying the weight of empirical experience shows us a different path.

Using automated submersibles as an example, was there a goal of full autonomy that had to be adjusted? What had to be pared back?

I wouldn’t say paring back, because I think full autonomy is an easier problem, in a lot of cases, than rich, situated, and embedded autonomy.

We built autonomous vehicles, and thought they would go out on their own and do their own thing. And it turns out they have more communications abilities than we had foreseen, early on, so you try to stay in touch with them as much as you can. Even if it’s only a few bits per second. They go out on periodic missions — they go from a manned ship, the manned ship gives them instructions and energy, they go out, and they do things in the ocean. They may be fully on their own and be autonomous for certain periods of time, but that autonomy is always bounded in space and time by returning to the people and exchanging energy and data.

Is that different from what Google is trying to do? If you tell your autonomous vehicle you’d like to go home, and then it takes you there, that’s an element of a human wrapper around the system.

Then you even wonder: What does that instruction look like? Can you change the route along the way? Can you change the route in response to things in the environment? You pretty quickly end up at some human role.

But the idea of a human who gets in an automated car, and then dozes off — is that unrealistic?

I don’t think that that’s safe, actually. I’m of that opinion. Maybe it’s possible to engineer that to be safe, but there’s no example of a system that works like that, that has a proven safety record, so it’s notional.

I increasingly think the problem of driverless cars is the problem of AI, which is to say decision-making within a human context. The issues around AI along those lines have been debated a lot.

I recently read an argument that, if we cede a certain level of autonomy to a computer onboard a car, that computer will have to have some sort of ethical sense of when it’s appropriate to kill someone. If you’re driving toward a large mass of people, and you have to swerve at the last minute and put the driver at risk in order to avoid crashing into a crowd —

I tend to think you’re much more likely to get killed by a poorly designed robot than by an unethical one. But either one can happen.

Let’s imagine a world where we automate people in, instead of automating people out. That’s generally what we do anyway. So let’s think about doing that — the highest form of technology is not full autonomy but is human-rich, trusted, transparent, and flexible in collaboration with humans. And that’s actually the highest goal. It may be unachievable to get it perfect, but it’s a worthy goal.

You raise this idea of a scale of automation — 1 to 10, 10 being full automation — and you argue we should be aiming for a 5 on this scale. What would a 5 look like, in terms of a driverless car?

Well, I call it the perfect 5, and by that I mean just that the user can move up and down the scale, in real time, at will, depending on the circumstance. So sometimes the best-suited [level of automation] would be an 8 or a 9; other times it would be a 2 or a 3. And the perfect 5 means that you have all of it accessible to you.

It doesn’t necessarily mean half-automated and half not. I’m a believer in using technology to make driving safer. And even to relieve the workload, and let you text and, you know, maybe do some other things with your time. But not for you to be completely, 100-percent checked-out and sleeping in the trunk.
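For a sense of what that could mean in software terms, here is a minimal, purely illustrative sketch (not anything from Mindell or any automaker; every name in it is hypothetical) of a controller that keeps the whole 1-to-10 range accessible and lets the driver move along it at will:

```python
# Hypothetical sketch of the "perfect 5" idea: every autonomy level stays
# accessible, and the driver can change levels in real time, at will.

from dataclasses import dataclass

MIN_LEVEL, MAX_LEVEL = 1, 10  # 1 = fully manual, 10 = fully automated


@dataclass
class AutonomyController:
    level: int = 5  # the "perfect 5": a starting point, not a ceiling or a floor

    def request_level(self, requested: int) -> int:
        """Driver asks for a new level; the system clamps it to the valid range."""
        self.level = max(MIN_LEVEL, min(MAX_LEVEL, requested))
        return self.level

    def handles_steering(self) -> bool:
        # A crude split for illustration only: at higher levels the car steers.
        return self.level >= 6


controller = AutonomyController()
controller.request_level(8)   # highway cruising: lean on the automation
controller.request_level(2)   # unfamiliar construction zone: take back control
```

The point of the sketch is only the shape of the interface: no level is walled off, and the request path always stays open to the person in the seat.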

So does Google have to rethink its approach?

I see a lot of different approaches being tried out there. I think the marketplace is going to work it out over the next 10 years. There’s a quote in my book from one of the leaders at BMW, saying, ‘People buy our cars because they like driving them; we’d be crazy to automate that out of the story.’ And, I think, not all car manufacturers — but a lot of them — feel that way. It’s a story about liability. I’m a believer — I’m more than a believer, I’m a scholar of this — that the person on the front lines, whose skin is on the line, who’s seeing the situation unfold in real time, ought to have something to add. The claim of full autonomy is really the claim that there’s absolutely nothing that the humans could add that would enhance the system, even though they’re the ones who are physically there and they’re the ones who are at risk.

That’s just not borne out by 50 years of experience with computerized systems.

What’s been the response to your argument? Is there a point that you’re making that’s not resonating with technologists?

The response has, generally, been supportive. I haven’t really received very strong, well-thought-through arguments supported by evidence that full autonomy really is the way to go. I’d be happy to have that conversation, and I want to have that conversation. I’m trying to be provocative here. At the same time, I think a lot of people, and a lot of people within robotics, realize that collaboration is the way of the future, and that robotics habituated within human environments is a worthwhile challenge to solve. I mean, that’s an interesting, socially important, technically challenging story, and only the narrowest kinds of solutions won’t allow for it. The issue is not autonomy; the issue is full autonomy.

I’m waiting for the op-ed in the Times that’s saying, no, full autonomy is the solution. Zero human input is the way to go? Bring it on. Let’s talk about it.

Are there people who would take that stance?

I think there are.

Let’s take the example of a robotic vacuum cleaner — a Roomba. Is that fully autonomous or is there still a level of human input, from your perspective?

Well, I don’t have one, so I don’t operate one, but you know, it still needs to be maintained, it still needs to operate within a human context. It needs to do the job that people want it to do. It’s interesting to ask how many people have Roombas, or how many people have them in their basement collecting dust — versus in their living rooms, actively collecting dust. The Roomba is sort of the one example of the successful robotic consumer product. And to the degree that it succeeded, it did so because it did a job, and it stopped being a robot, and it became something that accomplished something in a human setting.

OK…

So people ask me, when will we have robots in our homes? And I’m not in the business of making predictions, other than to say, ‘When someone solves the problem of how to make them truly collaborative and coexist within the human environment.’ And it’s not me who’s gonna decide when that’s solved; it’s going to be the users who decide when that’s solved.