
"Was God a Hacker That Put Us All Together?"

How certain are you that your fellow humans are conscious?


The best artificial intelligences today can perform narrow tasks pretty well. If we want to solve an equation, drive on a highway, identify skin cancer, or win at Go, we can enlist A.I.s to do it for us. But, depending on what you think about consciousness, there could soon be a day when A.I.-powered robots look, act, and feel sentient, which is to say conscious. At what point do robots that look, act, and feel conscious become actually conscious?

To truly raise the question, A.I. researchers will need to make significant headway. A.I.s will need to go from narrow intelligence to broad intelligence; it won’t suffice for a machine merely to get a perfect SAT score, which is itself a long way off. At the very least, A.I.s will need to achieve broad intelligence and pass the Turing test, fooling a person into thinking they’re human, and roboticists will need to perfect their imitative art. But in the meantime, we can wonder.

The Allen Institute for Artificial Intelligence (AI2), which Microsoft co-founder Paul Allen started in 2013, is chipping away at the former goal — broad intelligence. Its four projects — Aristo (learning), Euclid (mathematics), Semantic Scholar (scientific literature search), and Plato (computer vision) — are designed to produce A.I.s that can read, learn, reason, and know. Oren Etzioni, AI2’s current CEO, is a straight-shooting realist about A.I., which, these days, is rare. He’s not afraid of malevolent A.I., and he doesn’t expect paradigm-shifting superintelligence anytime soon.

Hanson Robotics, the Westworld-esque android laboratory that David Hanson founded, is chipping away at both goals. Its robots — Han, Sophia, Albert Einstein, Philip K. Dick, and others — are, in effect, advance looks at the future we may soon behold. Hanson’s robots combine animatronics and A.I., and their realism is already striking. Stephan Bugaj, who oversees personality design at Hanson Robotics, thinks we will get truly lifelike robots in our lives, and relatively soon. If we’re lucky, he thinks we may get sentient robots in our lifetimes. Whether or not we recognize them as such is another matter.

“On the physical machinery level, we’re probably 20 years out, 20 or 30,” Bugaj says. “In terms of cognition … I wouldn’t be surprised if we lived to see that day. And I also wouldn’t be surprised if we didn’t.”


A.I. Today

One who hopes to develop artificial intelligence must first understand natural intelligence, or, if not understand it, at least examine it. Etzioni is well-situated to compare the state of these arts. Along with AI2, Paul Allen founded the Allen Institute for Brain Science. Etzioni knows how much we know about the brain, and therefore knows how inadequate current A.I.s are.

“The pundits often describe deep learning as an imitation of the human brain,” Etzioni wrote for Wired. “But it’s really just simple math executed on an enormous scale.” It’s true that neural networks and deep learning architectures are loosely inspired by neurons, insofar as there’s an input-output, but the Allen Institute’s neuroscientists routinely highlight the magical characteristics of real, human brains. They show Etzioni that the brain, even “on a single-neuron level — let alone hundreds of billions of synapses, interneuron connection — is much, much more complex than what happens in the neural networks.”
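To make that contrast concrete, here is a minimal, purely illustrative sketch (not code from AI2 or anyone quoted here) of the arithmetic inside a single artificial “neuron”: a weighted sum of inputs pushed through a squashing function. Deep learning is, at bottom, this operation repeated billions of times.

```python
# A toy illustration of the "simple math" inside one artificial neuron:
# a weighted sum of inputs plus a bias, squashed by a sigmoid. The
# numbers below are made up for demonstration only.

import math

def artificial_neuron(inputs, weights, bias):
    """Weighted sum of inputs plus a bias, passed through a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Three inputs, hand-picked weights: no chemistry, no ion differentials,
# no spikes -- just arithmetic.
print(artificial_neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=0.1))
```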

Actual neurons, let alone brains, are evolutionary masterpieces, and it would be hubristic to think we’ve come close to honest reproductions. “I don’t think anybody, including the people who investigate this heavily, would argue that a processing unit in a neural network is anything like a neuron, or even a part of a neuron,” Etzioni says. “They’re complex chemical entities: There’s ion differentials, there’s electrical things going on — it’s just super complicated.” For the most part, the actual goings-on remain obscure, but, as a result of those obscure processes, brains are the most data- and energy-efficient machines in existence, bar none. “They say, like, ‘The human brain is powered by a burrito.’ The neural networks require massive data centers, GPUs, et cetera.”

But it seems at least possible that, one day, brain and A.I. research could converge. “People are excited about that, and people are drawing inspiration from the brain as much as they can. It’s just that, if you talk to the neuroscientists about neural networks, they laugh politely, and say, ‘More power to you — give me a call when you grow up.’”


A.I. Tomorrow

Those who believe that A.I.s will one day become conscious, or sentient, believe that the human mind is nothing over and above its constituents. Philosophers of mind call these people physicalists; physicalists believe that we will one day reduce consciousness to physical things, like atoms, and they most definitely do not believe in anything resembling souls. Etzioni credits Douglas Hofstadter, author of Gödel, Escher, Bach, with getting him hooked on A.I., but he also thinks Hofstadter proactively solved the mystery of consciousness.

Consciousness, to Hofstadter, is an epiphenomenon, something that emerges from a complex system as a by-product. “This stuff is happening — whether it’s math, or atomic activity, molecular activity — and consciousness, somehow, emerges from that,” Etzioni explains. A car’s transmission, on its own, cannot get you from point A to point B, but — when hooked up to the right parts, and organized in the right way — the whole system can. Much the same is true, physicalists believe, of the brain’s parts and consciousness.

Still, Etzioni thinks that “one of the most profound intellectual problems, right up there with the origins of the universe,” is how, specifically, consciousness “emerges,” and, secondarily, how we can replicate such emergence. “It could easily take us a hundred-plus years to figure it out,” he says.

“But I don’t think we’re going to look there, and say, ‘Oh my gosh: It’s something beyond atoms.’”


Consciousness is in the A.I. of the Beholder

If we ever do manage to produce an A.I. that A) is not powered by “wetware,” as it’s known (i.e., not powered by an actual brain), and B) appears, to all human observers, to be conscious, we’ll need to decide whether we actually call it conscious. To Bugaj, though, there’s no difference between apparent consciousness and actual consciousness. “The only reason I know that you’re sentient is because I take it on faith that the machinery that we have, that makes us come up with words and ideas and so on, generates sentience,” he says.

“I tell my daughter: ‘You’re alive,’” Bugaj says. “So, did I program her to say that she’s alive?”

Every one of us is a black box: We cannot justifiably claim to know that other humans are conscious. But we tend to act as though most people are. The particulars of how they are conscious remain murky, for now if not forever.

“Was God a hacker who put us all together? That’s a theory that people have; it’s not the craziest of all theories,” he says. “I believe we evolved, but we’re evolved bio-machines. We have some kind of program. We don’t know what it is.”

For Bugaj, then, it would be inconsistent to assume that a machine was not conscious if it generally seemed conscious. “If the android is a black box, and it appears to be sentient, I guess it is,” he says. Only the android will truly know if it is conscious.

“It’s up to us to decide if we believe them.”
