Elon Musk has not been shy about his concerns over how artificial intelligence could get out of control. He has gone so far as to say that A.I. is a greater threat than nuclear war, and he has called for government regulation of the technology. With that in mind, Musk may not be happy to hear that the Trump administration on Tuesday released a list of principles for regulating A.I. that recommends the federal government not get in the way of its development.
“Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth,” the document reads. “Agencies must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits.”
The Trump administration has been known for its deregulation of everything from environmental protections to food safety rules. This list of principles is one of the rare times the administration has proposed any kind of vision for the tech industry, and it apparently wants America's European allies to follow the same code.
“Europe and our allies should avoid heavy handed innovation-killing models, and instead consider a similar regulatory approach,” the White House Office of Science and Technology Policy said in a statement. “The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values.”
The E.U. seems to be considerably more worried than the White House about problems that could be caused by the development of A.I., as it released a set of guidelines for developing ethical A.I. last year. The guidelines cover everything from how to ensure A.I. is not biased against women or people of color to making sure A.I. is not able to reject orders from a human.
The White House claims it wants to promote “innovation” and “growth,” but insufficient regulation of this developing technology could become a real problem.
As Musk has warned, given the rate at which this technology is developing, it might not be long before it can outperform humans in a variety of ways, and we don’t know what would happen once that kind of superintelligence has been unleashed.
“By the time we’re reactive in A.I., regulation’s too late,” Musk once said while speaking at a meeting of the National Governors Association. “Normally, the way regulation’s set up, a whole bunch of bad things happen, there’s a public outcry, and then after many years, a regulatory agency is set up to regulate that industry. There’s a bunch of opposition from companies who don’t like being told what to do by regulators, and it takes forever. That in the past has been bad but not something which represented a fundamental risk to the existence of civilization.”
As Oxford philosopher Nick Bostrom wrote in his 2014 book Superintelligence, A.I. that is smarter than humans could become something humans are not capable of controlling.
“Once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences,” Bostrom wrote. “Our fate would be sealed.”
The White House’s first significant initiative to promote A.I.—what it calls the “American AI Initiative”—was launched in February of last year. Trump signed an executive order at that time to dedicate some federal funding to developing A.I. The executive order also called for international standards to be set for how A.I. is developed, and it seems these new principles outline the direction this administration thinks the United States and its allies should go.
While it’s a good thing when the government doesn’t get in the way of innovation, the government can also play a role in preventing emerging technology from causing unintentional harm. We’ve never developed a system that could become more intelligent than a human being, and if we don’t set up the proper regulations before A.I. reaches that level, the consequences could be severe.