The Games A.I. Can Win

Originally Published: 

People have always loved games. Strategy board games date back thousands of years. Abstract rules with visible consequences, clear winners and losers: what's not to love?

And you couldn't build a better testbed for artificial intelligence if you tried. Training an A.I. often means making the same mistakes thousands of times until they're corrected.

Chess has another appealing property: the best move depends only on the position in front of you, not on how the game got there. That memorylessness is known as the Markov property, and the continual re-evaluation it demands makes chess a perfect computer challenge.

That's what happened in 1996 and 1997, when IBM's Deep Blue played Russian grandmaster Garry Kasparov. Over two six-game matches, Deep Blue lost the first but triumphed in the 1997 rematch. It relied on "brute force" techniques, searching through massive numbers of positions rather than using the machine learning we know today.
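Deep Blue's actual search was far more elaborate (custom chess hardware, opening books, a handcrafted evaluation function), but the core brute-force idea can be sketched with exhaustive minimax on a game small enough to solve completely. The toy sketch below, with function names of my own invention, solves tic-tac-toe by scoring every reachable position; note that it only ever looks at the current board, never the move history:

```python
# Toy illustration of "brute force" game search: exhaustive minimax
# over tic-tac-toe. The board is a 9-character string, 'X', 'O', or ' '.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position for X: +1 if X wins, -1 if O wins, 0 for a draw.
    The score depends only on the current board -- the Markov property:
    how the game reached this position is irrelevant."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0  # board full: draw
    other = 'O' if player == 'X' else 'X'
    scores = [minimax(board[:i] + player + board[i + 1:], other)
              for i, cell in enumerate(board) if cell == ' ']
    # X maximizes the score, O minimizes it.
    return max(scores) if player == 'X' else min(scores)

# Perfect play from an empty board is a draw.
print(minimax(' ' * 9, 'X'))  # -> 0
```

Chess is far too large to solve this way, which is why Deep Blue paired deep-but-bounded search with heuristics for judging unfinished positions.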


Checkers always gets dissed as simplistic compared to chess. But in 1989, a University of Alberta team led by Jonathan Schaeffer set itself the ambitious goal of defeating the human World Checkers Champion.

"In 1990, Chinook became the first program in any game to win the right to play for a human World Championship," says the project's website. "The program lost the Championship match in 1992, but became Champion in 1994. By 1996, it became clear that the program was much stronger than any human, and Chinook was retired."

And you can play it today!