
Conscious A.I. Will Give Us Meaningful Relationships

A little empathy could go a long way.


By now, everyone’s familiar with the tale: artificial intelligence achieves consciousness, judges that humans are extraneous, and then enslaves or destroys us all. Think 2015’s Ex Machina, 2004’s I, Robot, 1984’s Terminator. The other side of the coin, though, tends to make fewer headlines. Spike Jonze’s 2013 film Her offers a sweeter narrative about conscious A.I.: a future in which a lonely writer can fall in love with an A.I. assistant named Samantha. Her eventual transition from conscious to super-intelligent does not signal the demise of our species — it just breaks Joaquin Phoenix’s heart.

“Here, the fellow was in love with Samantha, who was basically a program,” explains Susan Schneider, a professor at the University of Connecticut, who has long worked in both philosophy of mind and cognitive science. “Wouldn’t that relationship feel to him incredibly empty if she wasn’t in fact conscious?”

So while some futurists worry in public that artificial intelligence will attain human-level consciousness very soon, and that, when it does, it will extinguish humanity, other A.I. engineers think such speculation is ridiculous. But few ask whether conscious A.I. will pose more or less of an existential threat than unconscious A.I.

What Can Unconscious A.I. Do?

While semi-capable A.I.s already exist, the latest versions are beginning to learn by seeing, reading, watching, and searching. They're starting to listen, understand, respond, and speak.

Put a million photos in front of a narrow deep-learning algorithm with one task, and it’ll begin to tell you how many cats are in a photo. Give a million scientific journal articles to an A.I. and it’ll eventually know about complex drug interactions. Ask Siri or Alexa or M or Cortana or Google a multifaceted question and it’ll speak back an answer. A.I.s are lovers of sights and sounds. It won’t take engineers long to chip away at the other senses.

Meanwhile, these same engineers are giving A.I.s senses that even humans don’t have, like the ability to pick out trends in overwhelmingly huge data sets, or to win at the stock market, or to be so good at an ancient game that it’s no longer a game.

The almighty brain.

Wikimedia Commons

What Can Conscious A.I. Do?

But there’s another sort of sense that no one’s been able to even gesture at in code. In short, it’s something like creative wisdom. Or consciousness. Or just being human. Our brains run the ultimate, most sophisticated program in known existence. We can know and learn and ponder and muse. We can empathize, regret, love, and hate. The list is pretty darn expansive. Unless engineers pursue a full human emulation, A.I. likely won’t need some of these capacities.

Schneider tells Inverse that A.I. might not even need consciousness. If A.I. ever becomes conscious, that consciousness could be little more than a novelty, an accessory. We might need to worry more about cold, calculating, mindless A.I.s than we do about humanoid machines.

Potential Futures

In one future, we’ll soon develop what’s known as artificial general intelligence (AGI) — an A.I. system that can reproduce and match human intellect. Anything beyond that is known as a superintelligence, which means an A.I. that surpasses human intellect. This is what Scarlett Johansson’s Samantha attained in Her.

There’s also going to be debate over whether we should identify an A.I. as conscious or unconscious. If you take Schneider’s view, consciousness is “the feeling of what it’s like to be alive.” In essence, it’s just awareness — and not even self-awareness. “Whenever you’re awake, and even when you’re dreaming, there’s something it feels like to be you,” she tells Inverse. “When you see the hues of a sunset, or smell an espresso, you’re having experience. That’s what we’re talking about, here, and we’re asking if androids, or A.I. more generally, can have experience.”

If it’s not yet clear why this debate matters, consider that we may wind up facing moral quandaries with such machines, or even with androids like those seen on HBO’s Westworld, which asks its audience exactly these questions. In other words: When, if ever, is it wrong to harm an apparently sentient android?

Concerned intellectuals have founded a handful of institutions, such as Nick Bostrom’s Future of Humanity Institute at the University of Oxford, that aim to study such existential risks and set up safeguards in advance. Chief among these risks would be an untethered, malevolent superintelligence.

Consciousness as a Novelty

The common worry is that future A.I.s will be conscious, the implication being that unconscious A.I.s are less frightening. Schneider isn’t so sure. Most of our own processing, she tells Inverse, is non-conscious computation.

“Think of when you were first learning to drive, how conscious and aware you were of every little thing you did. It was probably that way when you first learned to walk.”

But as you get familiar with a task, it requires less thought and reflection, she says. “Even when you’re a master chess player, rules can become routinized — they become less conscious.” It’s the brain consolidating, conserving its resources, and working toward utmost efficiency. “Routinized, non-conscious computations are vastly quicker than slow, deliberative, conscious processing. So, the brain has learned to be quick by pushing a lot of material out of conscious thought, and reserving consciousness for issues that involve close attention.”

So, at the very least, we ought to question whether advanced A.I.s would even benefit from something like consciousness. Humans might benefit, insofar as conscious A.I.s are more endearing and accommodating. Elderly people, for instance, might want conscious android assistants as caretakers, she says. They’d make for better company.

Joaquin Phoenix as Theodore in the movie 'Her.'

GIPHY

So there may be circumstances in which it would be better to replicate consciousness in a computer. But Schneider says it must be a “cultural decision.”

“It’s something we need to really think about, and we can’t be fooled,” she says. “Just because a robot looks human — or, in the case of Samantha in Her, has Scarlett Johansson’s sexy voice — we can’t assume it’s conscious. We have to actually look at the architecture.”

Beyond that, there’s the matter of whether a conscious A.I. would be more or less dangerous for humanity.

Schneider suggests that either possibility could lead to disaster. We want A.I.s to be compatible with human flourishing, not opposed to it. Despite the now clichéd story, which says conscious A.I.s are what we must fear, we don’t know which possible future would be more problematic. “Military leaders have asked me whether consciousness could make an AGI or a superintelligence more dangerous, because it could become unpredictable,” Schneider says. “There is a chance that we won’t be able to anticipate the changes to a system that could occur if it is conscious.”

Consciousness Will Save Humanity

But there’s also a good chance that consciousness could improve humanity’s odds. “Consider how we feel about animals, or at least how many of us feel about animals,” Schneider says. “The reason it’s terrible to think of someone mistreating a dog, or a cat, is that we believe they’re conscious. Our consciousness — the fact that we’re conscious ourselves, that it feels like something to be us — enables us to see nonhuman animals as deserving protection from abuse.

“Our consciousness causes us to be compassionate,” she says.

In 2016, it might seem a stretch to wonder whether machines will ever ask whether humans are, or ever were, conscious. After all, the best proof any of us has that someone is conscious is that he or she says so.

When the tables are turned and machines wonder whether humans are truly conscious, they’ll look for the same thing we look for in machines: experience.

“It could be that A.I. poses a problem of biological consciousness about us — asking whether we have the right stuff for experience.” 
