It’s easy to parrot the doomsday predictions made by people who work in artificial intelligence: their forecasts of smart war don’t seem so far-fetched, especially when proposals to ban killer robots enjoy such wide support. That’s to say nothing of the singularity, the hypothetical moment when A.I. wrests itself from human control. The most visible advocate for this thinking is entrepreneur Elon Musk, who is heavily involved with A.I. professionally and, personally, a huge sci-fi nerd.

But John Giannandrea, senior vice president of engineering at Google, thinks this fear is way overhyped.

“I am definitely not worried about the A.I. apocalypse,” he said onstage at TechCrunch Disrupt SF on Tuesday. “This [mental] leap into, ‘Somebody is going to produce a superhuman intelligence and then there is going to be all these ethical issues,’ is unwarranted and borderline irresponsible because people who don’t understand the technology get very concerned rather than focusing on the positive effects.”

Giannandrea oversees the many machine-learning operations at Google, a company that, as he described it, sees itself as “A.I.-first.” It’s an approach the company has discussed before, and in practice it means putting machine learning at the heart of every Google product.

It’s something we’re already seeing in how Gmail’s “Smart Reply” generates predictive responses for users, much like the iPhone Messages app does. Next, we may see Google’s deep learning put into Google Maps, generating answers to user questions like, “What’s the name of that green bar next to the laundromat again?”

Given the major investment Google has made in machine learning, it makes sense that Giannandrea would rather focus on how the technology can help humanity than on all that could potentially go wrong.

Though Giannandrea did not name anyone he feels has been feeding the anti-A.I. hype, his criticisms echo recent comments from Tesla and SpaceX CEO Musk. Earlier this summer, Musk said that A.I. poses “a fundamental risk to the existence of human civilization,” a claim several researchers were quick to challenge. Musk also tweets regularly about the dangers of A.I.

Rejecting the idea that A.I. is inherently scary, Giannandrea argued that any ethical problems surrounding its use will be determined by the people operating it.

“Like any technology … powerful technologies have unintended consequences and can be used for good and evil,” he said.

You have to think Musk is crafting a tweet in response.
