“I’m very close to the cutting edge in A.I.,” Elon Musk said Sunday while giving a talk at SXSW, “and it scares the hell out of me.”
The SpaceX CEO said he considers A.I. a bigger threat than nuclear war, citing its far greater capabilities and potential consequences should the technology continue to go unregulated.
“It’s capable of vastly more than anyone knows, and the rate of improvement is exponential,” he said.
Musk said he is not generally an advocate of heavy government regulation, but A.I. is a notable exception: he finds the current lack of oversight more reckless than most people realize. Given its potential, the technology poses a "serious danger to the public" if misused or misunderstood. He proposed a public body with both scientific insight and oversight authority to ensure A.I. is developed safely.
“Mark my words: A.I. is far more dangerous than nukes,” Musk said.
Although artificial intelligence is rarely weighed against geopolitical threats, Musk was quick to contemplate the consequences of a third world war or a modern dark age, scenarios in which the "seeds of human civilization" would be threatened. If A.I. poses a larger threat than nukes, as he claims, then "we have to figure out a way to ensure the advent of super intelligence is one that is symbiotic with humanity."
This isn’t the first time Musk has warned audiences about the dangers of A.I. He called for regulation at last year’s National Governors Association meeting, describing the technology as a “fundamental risk to the existence of human civilization.” He also joined other experts in an open letter to the U.N. urging a ban on killer robots.
While at SXSW, Musk also drew a pointed comparison: in the case of warheads, not just anyone can build a nuke. Likewise, regulations and oversight must be in place to ensure A.I. is developed safely. If the public doesn’t want nuclear technology in reckless hands, it shouldn’t want the same for A.I.