Scientists at the University of Alberta are chipping away at the complexities of artificial intelligence with their new “DeepStack” system, which can not only play a round of poker with you, but walk away with all of your money.
This new technology builds on the legacy of systems like IBM’s Deep Blue, which in 1997 became the first program to beat a reigning world chess champion, Garry Kasparov, in a full match. As Michael Bowling, co-author of the research and leader of the Computer Poker Research Group at Alberta, puts it: poker is the next big step for designing A.I.
In a study of heads-up no-limit poker, DeepStack beat professional poker players at a rate of 49 big blinds per 100 hands.
“We are winning by 49 per 100, that’s like saying whatever the players were doing was not that much more effective than if they just folded every hand,” Bowling tells Inverse. “To put it into perspective, a pro player is trying to win by a rate of about five big blinds per 100. If they can make that rate then they are good enough to make a profit and a living playing poker. And we were beating that by nearly ten times.”
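Bowling’s comparison is simple arithmetic: a rate of 49 big blinds per 100 hands against a professional benchmark of roughly 5 works out to almost a tenfold margin. A quick sanity check, using only the figures quoted above:

```python
# Win rates in big blinds won per 100 hands, as quoted in the article.
pro_benchmark = 5    # roughly what a pro needs to make a living
deepstack_rate = 49  # DeepStack's measured rate against professionals

ratio = deepstack_rate / pro_benchmark
print(ratio)  # → 9.8, i.e. "nearly ten times" the professional benchmark
```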
Simply put, DeepStack is impressive, and it’s getting better. Many facets of the program make it so well suited to this complex game. First, it performs recursive reasoning: a back-and-forth exchange of working out how much your opponent thinks you know versus how much they think you think you know.
“If my opponent thinks I have really good cards and thinks I know about his cards, then he is going to fold,” he says. “This is recursive reasoning, it’s how to think through ‘what does everybody actually know’ and how to make decisions accounting for all of that.”
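The idea Bowling describes resembles what game theorists call level-k reasoning: a level-0 player considers only its own cards, while a level-k player best-responds to a model of a level-(k-1) opponent. A minimal sketch, with entirely hypothetical names and thresholds, not DeepStack’s actual method:

```python
# Illustrative level-k reasoning for a toy "bet or fold" decision.
# All names and thresholds here are hypothetical, not from DeepStack.

def best_response(my_strength, predicted_opponent_action):
    if predicted_opponent_action == "fold":
        return "bet"  # betting wins outright if the opponent folds
    return "bet" if my_strength > 0.5 else "fold"

def level_k_action(k, my_strength, opp_strength):
    if k == 0:
        # Level 0: act on your own cards alone, with no opponent modelling.
        return "bet" if my_strength > 0.5 else "fold"
    # Level k: predict what a level-(k-1) opponent does, then best-respond.
    predicted = level_k_action(k - 1, opp_strength, my_strength)
    return best_response(my_strength, predicted)

print(level_k_action(0, 0.2, 0.9))  # → fold: weak hand, no opponent model
print(level_k_action(1, 0.2, 0.1))  # → bet: predicts a weak opponent folds
```

Note how the level-1 player bets a weak hand because it predicts its weak opponent will fold: the kind of bluffing that only emerges once you reason about what the other player knows.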
Once it is able to fool you into believing it has a better hand, DeepStack starts strategizing. In the past, engineers have tried to create programs that see all the way to the end of the game and calculate their odds of winning. DeepStack, by contrast, is trained to look only a few decisions ahead and to think on its feet, so to speak.
“Ideally, the program would figure out if it’s going to win or lose and find a path through that situation to win,” he says. “But, that situation base is huge and there’s no way it could look all the way to the end of the game from the very beginning, it’s just too far away. So, DeepStack only looks a few decisions ahead and asks itself if a guess might be worth it and I liken that guess to intuition.”
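The approach Bowling describes, searching only a few decisions deep and substituting a guessed value for everything beyond, can be sketched as depth-limited search. This is an illustration under assumed names, not DeepStack’s actual algorithm, which continually re-solves the game using counterfactual values:

```python
# A minimal sketch of depth-limited lookahead: rather than searching to the end
# of the game, the search stops after `depth` decisions and substitutes an
# estimated value (the "intuition") for everything beyond. Names are
# illustrative; DeepStack's real method is not a plain max-search.

def estimate_value(state):
    # Stand-in for a learned value function (DeepStack uses a deep neural network).
    return state.get("heuristic", 0.0)

def lookahead(state, depth, actions, transition):
    """Best value achievable when looking only `depth` decisions ahead."""
    if depth == 0 or not actions(state):
        return estimate_value(state)  # guess instead of solving to the end
    return max(
        lookahead(transition(state, a), depth - 1, actions, transition)
        for a in actions(state)
    )

# Toy game: one decision, two moves, each ending in a state with a known value.
toy = {"moves": {"a": {"heuristic": 1.0}, "b": {"heuristic": 2.0}}}
actions = lambda s: list(s.get("moves", {}))
transition = lambda s, a: s["moves"][a]
print(lookahead(toy, 1, actions, transition))  # → 2.0
```

The key design point is the base case: when the depth budget runs out, the search trusts `estimate_value` rather than expanding the enormous remainder of the game tree.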
The intuition is the final step; it is the intelligence in “artificial intelligence.” The program has a deep neural network that lets it learn as it goes and store what it has learned, so that it can pull from that memory when playing a new game.
“It learns the value of these poker situations and generalizes it to situations it hasn’t seen before. And that’s what’s powerful about deep learning, it can summarize its knowledge and make good choices looking ahead.”
This may seem like the final frontier for Bowling and his team, but there is still a long road ahead. They want the program to handle multiple players, which would vastly expand the game’s complexity and uncertainty. The team also wants it to apply all of its knowledge to learning an entirely new game with a completely different set of rules from poker.
“If you could really have systems that don’t know the rules, that can deal with multiple players, and handle not knowing everything, then that would really be getting into the holy grail of A.I.”
Bowling believes this future isn’t too far away, and sees many real-world applications for DeepStack’s capabilities: from monitoring flight schedules to reduce the chance of terrorist acts, to designing honor systems that truly work for public transportation worldwide, to the simple idea of your phone helping you make smart financial decisions. The possibilities are endless.
“Our everyday decisions are never just ‘yes’ or ‘no.’ You never have all the information you need to make a decision. It’s a lot of guessing and having an A.I. that can evaluate our choices for us would become essential.”