Gaming

Inside the One Bot A.I. Rebellion From Hell

A two-person team gave Facebook and Intel a run for their money at the coolest A.I. competition ever.

Working with only the time and resources that they could afford over a Pittsburgh summer, Carnegie Mellon computer science students Devendra Chaplot and Guillaume Lample got in a shootout with Facebook and Intel. Their hired gun? A homemade A.I. capable of absolutely destroying the classic 1993 computer game Doom. With the game’s pixelated visual output serving as the only input, Chaplot and Lample’s software bot “thinks” as it moves around its virtual environment, deciding which Cyberdemon, shotgun guy, or Cacodemon to reduce to a pile of crimson goo.

Their software plays the game as a person would, except it’s much, much better than a human player.

The duo named their bot Arnold, which took second place at this year’s VizDoom, a competition that is perhaps the raddest gamification of hardcore A.I. research. Just as chess software plays against other chess software in the World Computer Chess Championship, VizDoom’s artificially intelligent contestants enter to shoot, zap, and blow each other up within Doom. They are self-governing software agents, sitting in the driver’s seat, and boy, do they love rocket launchers.

Here’s what it looks like when Arnold sweeps a room with a shotgun, a Gatling gun, and several other guns:

Computers asserted their chess superiority over humanity decades ago and are already better than people at a variety of games. Lately, it’s Google leading the charge. Its DeepMind division trained software to expertly play classic Atari games, and its AlphaGo program beat Lee Sedol, a flesh-and-blood professional Go player, earlier this year. That victory was an especially mind-bending milestone. The ancient Chinese game had long been considered out of computers’ reach because of the vast number of positions that can emerge during play, but technology edged in on this uniquely human pursuit anyway.

Guillaume Lample and Devendra Chaplot

So how should software play Doom if it’s going to be the best there is? Chaplot and Lample broke the task into two components to optimize. One was navigation — how to most effectively explore the Doom map, collect items, and find enemies. The second component, of course, was combat. “I was working on fighting the enemies and [Devendra] was working on training the bot to explore the game,” Lample explains to Inverse. “It was two different networks, and each of us was working on one of them.”
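For the curious, here is a minimal sketch of what such a two-network split could look like, written in Python with PyTorch. It is an illustration only, not the team’s actual code: the class names, action lists, and frame size are all assumptions.

```python
# Illustrative sketch of a two-network split (navigation vs. combat).
# Names, actions, and frame size are hypothetical, not the authors' code.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Turns a raw game frame into a compact feature vector."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )

    def forward(self, frame):
        return self.conv(frame)

class ActionHead(nn.Module):
    """Scores a small set of actions from encoded frame features."""
    def __init__(self, feature_dim, n_actions):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(feature_dim, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, features):
        return self.fc(features)

# One network explores the map, the other handles firefights.
NAV_ACTIONS = ["forward", "turn_left", "turn_right"]        # hypothetical
COMBAT_ACTIONS = ["attack", "strafe_left", "strafe_right"]  # hypothetical

encoder = FrameEncoder()
with torch.no_grad():
    feature_dim = encoder(torch.zeros(1, 3, 60, 108)).shape[1]

navigation_net = ActionHead(feature_dim, len(NAV_ACTIONS))
combat_net = ActionHead(feature_dim, len(COMBAT_ACTIONS))

def choose_action(frame, enemy_visible):
    """Route the decision: fight if an enemy is on screen, otherwise explore."""
    features = encoder(frame)
    if enemy_visible:
        return COMBAT_ACTIONS[combat_net(features).argmax().item()]
    return NAV_ACTIONS[navigation_net(features).argmax().item()]

# Example: a dummy frame with no enemy on screen sends the bot exploring.
dummy_frame = torch.zeros(1, 3, 60, 108)
print(choose_action(dummy_frame, enemy_visible=False))
```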

Learning from setbacks and failures over time, Doom software bots like Arnold evolve their neural networks into more robust decision-making machines. Doom is an especially advantageous A.I. laboratory because it pits the artificial intelligence against human play; Lample suggests that the A.I. improves more by playing against humans than against other software bots. “[In other situations, you] make the A.I. play the game, then you make the human play the game, but they don’t play against each other. This is the difference with Doom, because the human can play against the A.I.” When a software bot can totally crush its human competitors, it’s a clearer indication of its ability to thrive against its virtual opponents.
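To make that trial-and-error idea concrete, here is a rough Python sketch of the kind of play-and-learn loop such a bot runs, using the ViZDoom research platform’s Python API. The config file, the action set, and the random stand-in policy are illustrative assumptions, not the competition bot’s actual setup.

```python
# Rough sketch of a play-and-learn loop on the ViZDoom platform.
# The scenario config and action set are assumptions for illustration.
import random
from vizdoom import DoomGame

game = DoomGame()
game.load_config("basic.cfg")  # hypothetical scenario config file
game.init()

ACTIONS = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # e.g., move left, move right, attack
memory = []  # (frame, action, reward) tuples a learner would train on later

for episode in range(10):
    game.new_episode()
    while not game.is_episode_finished():
        state = game.get_state()
        frame = state.screen_buffer              # pixels are the only input
        action = random.randrange(len(ACTIONS))  # stand-in for the trained policy
        reward = game.make_action(ACTIONS[action])
        memory.append((frame, action, reward))
    # After each episode, a real agent would replay `memory` to update its
    # network, reinforcing actions that led to kills and items and
    # discouraging ones that preceded its deaths.
game.close()
```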

Arnold was so great at Doom because he developed a strategy that the other bots didn’t. Wojciech Jaskowski, a computer scientist at the Poznan University of Technology and the main organizer of the VizDoom competition, reduced it to a “simple trick”: Arnold “was ducking most of the time and thus was hard to hit.”

This critique ignores the essential fact that Arnold not only discovered that crouching was possible, but determined it to be advantageous. Over time, the reward signal shaping the software’s neural network told it that crouching was the way to go. It learned something the other bots hadn’t, and capitalized on its advantage. Why should intelligence like that be called artificial?

“It turns out our agent learned to duck, minimizing the surface area where it was vulnerable, significantly decreasing the chance of death,” Chaplot tells Inverse. While Arnold’s video game life-hack was advantageous enough to make his kill-to-death ratio the best in the competition, it slowed him down quite a bit. A good kill-to-death ratio isn’t enough to win in VizDoom, and the lost speed perhaps cost the team the number-one spot.

While they were bested by A-list teams from Intel and Facebook across the competition’s two main tracks, it’s their home-brew effort that painted them as darlings of the event.

“I think it’s impressive what they’d done over the summer by themselves,” says CMU computer science professor Ruslan Salakhutdinov.

Devendra Singh Chaplot in his workspace.

With this notable result in their pockets, and new attention stirred up for their work, Chaplot and Lample will next teach computers how to play video games much more complex than Doom. Though the 23-year-old landmark computer game is loaded with nostalgic whimsy, it’s a relic. Your character can’t even jump. Contemporary video games far surpass Doom in graphics, complexity, and realism. What would it say of an artificially intelligent piece of software to be a gifted Grand Theft Auto V player?

“We would like to apply this to some more advanced first-person shooter games in the future,” says Lample. “We could apply this to games that are much harder to play than Doom. We are basically trying to figure out how to access these games to apply our results.”

Don’t expect to see A.I. like this taking on modern video games just yet. There are sure to be a few intermediary steps before Arnold learns to steal a virtual car in a virtual city.

Maybe Quake’s next.
