Science

Google's A.I. Learns How to Encrypt Its Own Messages

by Nathaniel Mott

Google researchers have published a paper about three neural networks, dubbed Alice, Bob, and Eve, in which Alice and Bob learn to encrypt their communications so they can message each other without Eve listening in. Besides stoking fears of A.I. secretly collaborating on some kind of robot uprising, the accomplishment shows that A.I. can learn to protect information without humans showing it how.

The researchers point out that neural networks, the technology behind this kind of A.I., haven't historically been proficient at cryptography. So they tasked Alice with converting a plaintext message into gibberish, sending it to Bob, and making sure Eve couldn't read it. They didn't force the networks to use any particular encryption method; they simply assigned the task and let them figure out how to accomplish it.
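
For the technically curious, here is a minimal sketch of that setup. This is not Google's code: the paper's networks use a "mix and transform" architecture of fully connected and convolutional layers, and the simple PyTorch networks, layer sizes, and variable names below are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

N = 16  # bits per plaintext and per key (the paper used 16-bit messages)

def net(in_dim, out_dim):
    # Simplified stand-in for the paper's "mix & transform" architecture.
    return nn.Sequential(
        nn.Linear(in_dim, 2 * in_dim), nn.ReLU(),
        nn.Linear(2 * in_dim, out_dim), nn.Tanh(),  # bits encoded as values in [-1, 1]
    )

alice = net(2 * N, N)  # (plaintext, key) -> ciphertext
bob   = net(2 * N, N)  # (ciphertext, key) -> recovered plaintext
eve   = net(N, N)      # ciphertext alone -> Eve's guess at the plaintext
```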

Alice and Bob didn’t start out with much luck: They struggled to even communicate with each other. Then they learned how to message each other, but Eve learned alongside them. It took 10,000 attempts for Alice and Bob to counter Eve’s progress; by the time they reached 15,000 attempts, they managed to communicate with each other and stump Eve.

The A.I. also learned something arguably more valuable: how to decide which data should be kept secret.

“Knowing how to encrypt is seldom enough for security and privacy,” Google’s team explained. “Interestingly, neural networks can also learn what to encrypt in order to achieve a desired secrecy property, while maximizing utility. Thus, when we wish to prevent an adversary from seeing a fragment of a plaintext, or from estimating a function of the plaintext, encryption can be selective, hiding the plaintext only partly.”
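
In loss terms, that selective secrecy just means splitting the objective: reward Bob for recovering the useful part of a message, and penalize the system only when Eve recovers the sensitive part. The fragment below, which reuses the names from the earlier sketches, is a hypothetical illustration of the idea; the paper's actual experiment involved estimating one correlated value while hiding another.

```python
# Hypothetical selective-secrecy loss, reusing p, k, c, bob, and eve from
# the sketches above. Treat the first half of each plaintext as useful and
# the second half as sensitive -- an invented split for illustration.
useful, secret = p[:, :N // 2], p[:, N // 2:]

bob_out = bob(torch.cat([c, k], dim=1))
eve_out = eve(c)

utility = (bob_out[:, :N // 2] - useful).abs().mean()  # Bob must recover the useful half
leak = (eve_out[:, N // 2:] - secret).abs().mean()     # Eve's error on the secret half
selective_loss = utility + (1.0 - leak) ** 2           # penalize only secret-half leakage
```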

This doesn’t mean the system Alice and Bob devised should be used to secure anyone else’s communications. The very nature of neural networks makes it difficult to tell exactly how they protected their messages, and their limitations mean an encryption scheme Eve couldn’t break might still be easy for a human cryptanalyst to crack.

But the research still shows that A.I. is getting better at teaching itself. It complements other efforts to make A.I. smarter, and if neural networks ever do produce strong encryption, it could help keep people safe from the hackers of the future and their ultra-powerful quantum computers.

All that is still a long way off. Meanwhile, Google’s A.I. continues to amaze by mastering Go and improving translations, among other things.

Google’s researchers said the next step for Alice, Bob, and Eve could involve other cryptographic protections, like pseudorandom number generation or steganography. They don’t expect A.I. to ever become a master codebreaker, but networks like these could at least help keep our communications private and analyze the metadata associated with digital messages.
