Google’s DeepMind subsidiary just took a major step toward artificial intelligence that can teach itself better than humans can. According to research published Wednesday, its new AlphaGo Zero bot learned to play the ancient Chinese game Go with zero human input, then proceeded to dominate its human-taught predecessors.
As reported in the latest issue of Nature, the team at DeepMind set out to improve on its previous A.I. creation, AlphaGo. That earlier bot learned Go by observing thousands of actual human games to absorb the game’s often intricate strategy. AlphaGo Zero, by contrast, was simply left to play against itself, initially making moves at random and learning what worked and what didn’t.
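The core idea of improving from pure self-play can be illustrated on a much smaller scale. The sketch below is not DeepMind’s method (AlphaGo Zero pairs a deep neural network with Monte Carlo tree search); it is a minimal toy example, on the simple game of Nim, of an agent that starts out moving at random and learns which moves work purely by playing games against itself. All names and parameters here are illustrative choices, not anything from the paper.

```python
# Toy self-play learning on Nim: 5 stones, each turn take 1 or 2,
# whoever takes the last stone wins. Both "players" share one value
# table and improve it from the outcomes of their own games --
# a (vastly simplified) analogue of learning with no human examples.
import random

random.seed(0)
values = {}  # (stones_left, move) -> estimated value for the player moving

def choose(stones, epsilon=0.1):
    """Pick a move: usually the best known, occasionally random (exploration)."""
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < epsilon:
        return random.choice(moves)  # explore: play at random
    return max(moves, key=lambda m: values.get((stones, m), 0.0))

def play_one_game():
    """Self-play one game; return the move history and the winner (0 or 1)."""
    stones, player, history = 5, 0, []
    while True:
        move = choose(stones)
        history.append((player, stones, move))
        stones -= move
        if stones == 0:
            return history, player  # this player took the last stone and wins
        player = 1 - player

# Learn from thousands of self-play games: nudge each move's value
# toward +1 if the player who made it won, -1 if they lost.
for _ in range(5000):
    history, winner = play_one_game()
    for player, stones, move in history:
        reward = 1.0 if player == winner else -1.0
        old = values.get((stones, move), 0.0)
        values[(stones, move)] = old + 0.1 * (reward - old)
```

After training, the table reflects Nim’s optimal strategy, e.g. taking 2 stones from a pile of 5 (leaving the opponent in a losing position) scores higher than taking 1 — knowledge the agent was never told, only discovered through self-play.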
In just three days, it was already strong enough to beat its predecessor 100 games to none. But that was just the warm-up:
After 40 days of self training, AlphaGo Zero became even stronger, outperforming the version of AlphaGo known as “Master”, which has defeated the world’s best players and world number one Ke Jie.
Over the course of millions of AlphaGo vs AlphaGo games, the system progressively learned the game of Go from scratch, accumulating thousands of years of human knowledge during a period of just a few days. AlphaGo Zero also discovered new knowledge, developing unconventional strategies and creative new moves that echoed and surpassed the novel techniques it played in the games against [Go masters] Lee Sedol and Ke Jie.
In their write-up, the team at DeepMind says it hopes A.I. like AlphaGo Zero can do things previously thought impossible, pushing the boundaries of human imagination and tackling incomprehensibly complex problems “such as protein folding, reducing energy consumption or searching for revolutionary new materials.”
In the meantime, maybe see if the A.I. wants to play Monopoly instead.