People have always loved games. Strategy board games date back thousands of years. Abstract rules with visible consequences, clear winners and losers: what's not to love?
You also couldn't build a better testbed for artificial intelligence if you tried. Training an A.I. often means making the same mistakes thousands of times until they're corrected.
Every move in chess presents a fresh situation: the best move depends only on the current board, not on how the game arrived there. This independence from history is known as the Markov property, and the continual re-evaluation it demands makes chess a perfect computer challenge.
That's what happened in 1996 and 1997, when IBM's Deep Blue played Russian grandmaster Garry Kasparov. Across twelve games in two matches, Deep Blue ultimately triumphed. It used "brute force" techniques, evaluating enormous numbers of positions rather than relying on the machine learning we know today.
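Brute-force game search can be sketched on a toy game. This isn't Deep Blue's chess engine, just the same core idea at miniature scale: exhaustively explore every line of play until the outcome is known. The game here is single-pile Nim, where players alternate taking 1 to 3 stones and whoever takes the last stone wins.

```python
# A minimal brute-force search: recursively try every legal move and
# see whether any of them forces a win. Toy game: single-pile Nim.

def best_move(stones):
    """Return (move, wins): wins is True if the player to move can force a win."""
    for take in (1, 2, 3):
        if take > stones:
            break
        if take == stones:
            return take, True          # taking the last stone wins outright
        _, opponent_wins = best_move(stones - take)
        if not opponent_wins:
            return take, True          # leave the opponent a losing position
    return 1, False                    # every reply loses; play on anyway

print(best_move(5))   # -> (1, True): take one stone, leaving a losing 4
```

Chess resists this approach only by scale: the same exhaustive idea works, but the tree of positions is astronomically larger, which is why Deep Blue needed specialized hardware.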
Checkers always gets dissed as simplistic compared to chess. But in 1989, a University of Alberta team led by Jonathan Schaeffer set itself the ambitious goal of defeating the human World Checkers Champion.
"In 1990, Chinook became the first program in any game to win the right to play for a human World Championship," says the project's website. "The program lost the Championship match in 1992, but became Champion in 1994. By 1996, it became clear that the program was much stronger than any human, and Chinook was retired."
And you can play it today!
Mortal Kombat is a brutal button-masher where the right combo can mean life or gruesome death. In 2016, grad students at the Israel Institute of Technology created the Retro Learning Environment (RLE), meant to tackle games on the SNES, as well as other systems.
It was a struggle. "Only in the game Mortal Kombat," the students wrote at the time, "a trained agent was able to surpass a expert human player performance as opposed to Atari games where the same algorithms have surpassed a human player on the vast majority of the games."
Microsoft's Project Malmo, an AI experimentation platform built on Minecraft, wants AI to think holistically: the kind of thinking that grasps how crafting works and which objects a given situation calls for. In 2017, teams were working towards catching pigs. In 2018, the focus evolved towards collaboration.
By 2019, they were finding rare diamonds.
After conquering checkers, the University of Alberta team set their ambitions towards poker.
“Poker has been a longstanding challenge problem in artificial intelligence,” said Michael Bowling, U of A professor in 2017. “It is the quintessential game of imperfect information in the sense that the players don’t have the same information or share the same perspective while they’re playing.”
“We train our system to learn the value of situations,” said Bowling. “Each situation itself is a mini poker game. Instead of solving one big poker game, it solves millions of these little poker games.” It beat 33 professionals from around the world, so those games paid off.
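The idea of valuing one of those "mini poker games" can be illustrated with a toy calculation. This is not the Alberta team's actual algorithm, just the basic ingredient underneath it: because the opponent's hand is hidden, you value a decision by averaging its payoff over your beliefs about that hidden hand. All hand categories, probabilities, and chip amounts below are invented for illustration.

```python
# Toy imperfect-information decision: call or fold against a hidden hand,
# by averaging payoffs over a belief distribution. Numbers are made up.

def expected_value(payoffs, beliefs):
    """Average a payoff table over a belief distribution on hidden hands."""
    return sum(beliefs[hand] * payoff for hand, payoff in payoffs.items())

# Our estimated probability of each type of hidden opponent hand.
beliefs = {"bluff": 0.4, "medium": 0.35, "monster": 0.25}

# Payoff to us, in chips, of calling a 10-chip bet into a 20-chip pot.
call_payoffs = {"bluff": +30, "medium": -10, "monster": -10}
fold_payoffs = {"bluff": 0, "medium": 0, "monster": 0}

ev_call = expected_value(call_payoffs, beliefs)   # 0.4*30 - 0.6*10 = 6.0
ev_fold = expected_value(fold_payoffs, beliefs)   # folding risks nothing: 0.0
print("call" if ev_call > ev_fold else "fold")    # prints "call"
```

In chess both players see the whole board; here the value of an action only exists relative to a belief about what you cannot see, which is what makes poker such a different kind of challenge.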
These days, you don't need a supercomputer to create a neural network. In 2017, programmer SethBling trained a recurrent neural network to play Mario Kart and win the Mushroom Cup all on its own.
While it's not the most complex AI, steering based on predictions learned from watching hours of Bling's driving, it was good enough for gold. He made a YouTube video explaining what he did.
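The underlying idea, imitation learning, can be shown without any neural network at all. This toy stand-in is far simpler than SethBling's recurrent network: record what a human did in each situation, then replay the human's most common choice. The situations and actions below are hypothetical.

```python
# Toy imitation learning: count a human's (situation, action) choices,
# then act by picking the most frequent action for the current situation.
from collections import Counter, defaultdict

# Hypothetical recordings from watching a human drive.
demonstrations = [
    ("straight", "accelerate"), ("straight", "accelerate"),
    ("left_turn", "steer_left"), ("left_turn", "steer_left"),
    ("left_turn", "drift"), ("item_box_ahead", "grab_item"),
]

policy = defaultdict(Counter)
for situation, action in demonstrations:
    policy[situation][action] += 1

def act(situation):
    """Pick the action the human chose most often in this situation."""
    return policy[situation].most_common(1)[0][0]

print(act("left_turn"))   # -> steer_left
```

A recurrent network improves on this by generalizing to situations it never saw and by remembering recent context, but the training signal is the same: do what the human did.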
No game can match the challenge Go presents to AI. With far too many possible positions to brute-force the way Deep Blue did, DeepMind's AlphaGo team needed their AI to learn and adapt to the many challenges of the ancient game.
After years of training, the AlphaGo system finally took on South Korean Go master Lee Sedol in 2016 and won four games out of five. With Go mastered, it's hard to imagine a game an AI can't eventually win.
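One ingredient of AlphaGo's approach, Monte Carlo evaluation, can be sketched in miniature. AlphaGo paired deep neural networks with Monte Carlo tree search; a building block of such search is estimating a position's value by playing random games to the end instead of exploring every line. This toy version evaluates single-pile Nim positions (take 1 to 3 stones, taking the last stone wins); it is a sketch of the general technique, not DeepMind's system.

```python
# Monte Carlo position evaluation: instead of searching every line of
# play, estimate a position's value by finishing many random games.
import random

def rollout_value(stones, playouts=5000, seed=0):
    """Estimate the win probability for the player to move, under random play."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(playouts):
        s, mover_to_act = stones, True
        while s > 0:
            s -= rng.randint(1, min(3, s))     # random legal take
            if s == 0 and mover_to_act:
                wins += 1                      # our player took the last stone
            mover_to_act = not mover_to_act
    return wins / playouts

# Theoretically winning positions score higher than losing ones,
# even though no playout involves any actual strategy.
print(rollout_value(5), rollout_value(4))
```

The estimates are noisy, but they rank positions correctly often enough to guide a search, which is why random rollouts scaled to games like Go where exhaustive search could not.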