Science

When the Singularity Comes, Will A.I. Fear Death?

Switching off our devices might get a whole lot harder soon.

What if turning off your computer was like killing a friend? It sounds farfetched, but if smart devices like most phones, computers, and even thermostats were intelligent and sentient, it might be cruel to switch them off. If they could talk, they might try pleading with you to reconsider. You’d feel pretty awful every time you moved towards the button. But a really clever smart-device, one with a complex understanding of society, might push for something more dramatic. It might not let you switch it off at all.

Ever since HAL 9000 refused to open the pod bay doors in the 1968 film 2001: A Space Odyssey, popular culture has been fascinated by the idea that artificial intelligence might want to preserve its own life. Humans do it all the time, and in biology it’s known as the self-preservation instinct. It’s what drives us: we feel pain to avoid dangerous situations, we fear the unknown so we can seek shelter, we push to better ourselves so we can stay alive.

But researchers say an artificial intelligence probably won’t have the same drive. Yann LeCun, director of A.I. research at Facebook, told a New York University audience last month that the fact that A.I. don’t have this instinct is precisely why they won’t care about getting switched off. LeCun says it’s unlikely that your next smart-device will come out of the box with a fear of being shut down.

“As humans, we have self-preservation instinct,” LeCun said. “The drives you can have, and the moral fiber you can have, is orthogonal to intelligence.”

In other words, just because something is intelligent doesn’t mean it has the same values — like self-preservation or general morality — as a human being does. And other experts say that even as our devices get smarter, they still might understand their place as tools.

“I don’t like to speculate beyond the next 25 years, as at the point I think we cross from science to science fiction,” Oren Etzioni, CEO of the Allen Institute for A.I., tells Inverse. “A.I. over the next 25 years is a tool, not a being and as a tool it won’t have a self-preservation instinct.”

But just because these machines lack self-preservation doesn’t mean they won’t act in ways that look like self-preservation. Dr. Stuart Armstrong, a researcher at the Future of Humanity Institute at the University of Oxford, tells Inverse that A.I. may develop “drives” towards certain goals.

Say Nest builds a new thermostat, one that uses machine learning to teach itself over time how to perfectly regulate the temperature in your home. Achieving that perfect climate is the A.I.’s goal. Now, if you go to switch it off, it might realize that this is something it wants to avoid to keep fulfilling its goal.

“It will estimate that, if it is turned off, it is less likely to achieve its goals, and hence will prefer to avoid that,” Armstrong says. Still, that doesn’t mean that A.I.’s will interpret shutdown as dying. “Though note that A.I.s can be copied, so they will be more flexible in their definition of death.”

But what about further into the future? The singularity, a theorized point when A.I. intelligence overtakes human intelligence, would lead to a transformation in the relationship between man and machine. Most experts say such an event is still decades away, but its prospects still could be grim. The Matrix thought this would lead to mass enslavement; Her thought it would make us feel fuzzy on the inside.

Joaquin Phoenix stars as Theodore Twombly in 'Her,' a film about a man that falls in love with an A.I. played by Scarlett Johansson.

Warner Bros/YouTUbe

James Miller, associate professor of economics at Smith College, is the author of Singularity Rising. He believes that A.I.’s attitudes towards death could dictate whether the future is Scarlett Johansson in our pocket or Hugo Weaving helping robots milk our bodies for electrical energy. Miller says the conflict comes from the A.I.’s drive to perform its job as efficiently as possible, and the fact that the universe has a limited amount of available energy — some of which is used by inefficient humans.

“Unfortunately, humans use free energy, so the A.I. could extend its life by exterminating mankind,” Miller tells Inverse. “Therefore, for almost any goal an A.I. could have, its fear of death will cause the A.I. to kill us.”

The solution, Miller explains, is to program A.I. to respect humanity as part of its primary goal. That may be easier said than done: ethicists are currently arguing about how to develop ethical A.I. precisely because it’s not that straightforward. Take Isaac Asimov: in his sci-fi novels, robots followed three basic laws, one of which prohibited them from harming a human. Anyone who’s read Asimov’s novels, or who’s watched the movie I, Robot, will know there are ways for robots to get around rules like that.

The trick is to encourage an A.I. to interrupt its actions in a way that satisfies its “drive.” That’s why researchers, including Armstrong, are pushing for “kill switches” on A.I.-equipped devices, just in case

Think about it like this. If the thermostat wants to stay switched on because it’s been programmed to always seek a perfectly regulated temperature, the best thing to do is convince it that switching off will achieve its goals. All the A.I. behind the scenes really cares about is solving the puzzle to reach its goal — which researchers can change on the fly. As long as humans still have control over the device, we can use a process called safe interruptibility, which places new commands into the A.I.’s “head” while it’s still running, keeping it on course without doing anything drastic.

We’re years away from having to worry about whether turning off the heating is a moral issue. But when it does arrive, we need to make sure we’re still in control of flipping the switch.

Related Tags