Over the last few weeks, an unknown Go player ripped through the online leaderboards, defeating some of the world’s top players. Called “Master(P),” this anonymous knave lived up to its handle, making competitors say, “Ugh.”
Players suspected that the mystery account whooping everybody’s butt at Go was likely an A.I., according to Nature News, but nobody knew for sure. Then on January 4, Demis Hassabis, co-founder and CEO of Google’s A.I. company DeepMind, confirmed those suspicions and announced that Master(P) was indeed a new version of AlphaGo.
With AlphaGo’s historic victory over 18-time world champion Lee Sedol in March 2016, DeepMind proved it had created something the world had never seen before. Go requires a different kind of thinking than other games do: it has a nearly infinite number of possible configurations, which is why scientists and Go players long saw defeating a top human at the game as a major hurdle for artificial intelligence.
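A quick back-of-the-envelope calculation (illustrative, not from the original reporting) shows the scale: each of the 361 points on Go’s 19×19 board can be black, white, or empty, which gives an upper bound of 3^361 board arrangements.

```python
# Rough upper bound on Go board configurations: each of the
# 19 x 19 = 361 points is black, white, or empty, so there are
# at most 3**361 arrangements. Most of those are not legal
# positions, but the order of magnitude makes the point.
points = 19 * 19
upper_bound = 3 ** points
print(len(str(upper_bound)))  # number of decimal digits: 173
```

For comparison, the number of atoms in the observable universe is commonly estimated at around 10^80, a number with only 81 digits.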
AlphaGo defeated Sedol 4-1 in their five-game series, but researchers at DeepMind must not have been satisfied with that one loss. Aiming for perfection, they pitted their program against more and more players around the world. By putting AlphaGo into online games, they gave it the opportunity to learn from more players, studying their moves and incorporating new strategies. After all, an A.I. learns from experience, just like humans do.
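The “learns from experience” idea can be sketched with a running-average update — a heavily simplified stand-in for the reinforcement-learning methods DeepMind actually uses (the function and numbers below are purely illustrative, not DeepMind’s code):

```python
def updated_win_rate(outcomes):
    """Keep a running win-rate estimate from a stream of game
    results (1 = win, 0 = loss), updating incrementally after
    each game instead of recomputing from scratch."""
    estimate, games = 0.0, 0
    for won in outcomes:
        games += 1
        # Nudge the estimate toward the latest result; the step
        # size 1/games shrinks as experience accumulates, so early
        # games matter a lot and later games fine-tune.
        estimate += (won - estimate) / games
    return estimate

# A 4-1 series like the one against Lee Sedol:
print(updated_win_rate([1, 1, 1, 0, 1]))  # 0.8
```

Real systems update thousands of internal parameters this way rather than one number, but the principle is the same: every game played shifts the program’s estimates a little closer to what actually works.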
Beyond the opportunity to learn from more opponents, AlphaGo seemed to benefit from the rapid pace of online games, confusing slower, more methodical human players with what looked like irrational moves. “Strategies that appear flawed on paper to game experts can actually disorient human masters,” reported TechCrunch.
Hassabis took to Twitter on January 6 to say that “we’ve been hard at work improving AG, delighted to announce that more games will be played in early 2017!” So while 2016 saw a monumental step forward for artificial intelligence, 2017 is likely to see AlphaGo slay even more human players.