Even Google’s CEO thinks AI needs regulation

“We need to be clear-eyed about what could go wrong.”

Sundar Pichai, CEO of Alphabet and Google

Bloomberg/Getty Images

There’s no better-equipped company to develop artificial intelligence than Google. Effective AI requires an enormous volume of data and monstrous raw computing power. Google’s got both in spades. But we all know what comes with great power, and even Google CEO Sundar Pichai recognizes it might be prudent for regulators to impose a few limits on AI before we accidentally build Skynet.

Pichai sounded the call for AI regulation in a letter (that reads like a press release) published in the Financial Times, arguing that history is full of examples of “how technology’s virtues aren’t guaranteed.” Are we to take Pichai’s call at face value, or is this a preemptive strike so that when Google inevitably outpaces regulators’ ability to regulate its AI initiatives, the search giant can argue it tried to do the right thing?

We’ve already seen Google’s AI used to spot breast cancer more reliably than human diagnosticians can, and it’s being used to create near real-time weather prediction models. It’s also used neural networks to beat human Go players, identify individual pets in users’ Google Photos libraries, and help Gmail users compose automatic responses in their own style. These are not the use cases we need to worry about, though.

What could go wrong? Everything — “[W]e need to be clear-eyed about what could go wrong. There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition,” Pichai says, adding that though there is some work underway, “there will inevitably be more challenges ahead that no one company or industry can solve alone.” Companies like Google “cannot just build promising new technology and let market forces decide how it’ll be used,” he continued.

Pichai’s quite right, of course. AI is, as he argues, “too important” not to be regulated. But neither Google publishing its own AI principles to guide its development of the technology, nor its encouragement of other companies to do likewise, is enough.

Who regulates the regulators? — As big tech has taught us repeatedly, expecting businesses to self-regulate is ludicrous. Without regulation, companies will focus on only one thing: maximizing shareholder value. The only way to change that is with legislation that includes punitive measures that threaten to erode that value when ignored.

Pichai is making the right noises, sure. But governments need to make the right moves. And given that it’s home to the bulk of big tech businesses and serves as a litmus test for the rest of the world, the U.S. ought to lead the charge.