Here's How Google's DeepMind Is Trying To Give Imagination to A.I.

The lines between human and machine are more blurred than ever. 


Google’s top artificial intelligence team has just created a machine-learning A.I. with a distinctly human quality: “imagination.” At least that’s the word the company DeepMind is using to describe the ability of their new algorithm to predict the effects of its actions before executing them.

Two upcoming research papers written by DeepMind researchers detail the development of an “Imagination-based Planner” for future A.I. programs — ostensibly a way to allow autonomous computer agents to presciently assess the likely outcomes of a given situation and then act accordingly. The researchers argue that this trait gives an A.I. agent the capacity to develop a wide variety of plans and strategies for solving a particular problem, evaluate them against one another, and choose the one most likely to lead to a successful outcome.

“Imagining the consequences of your actions before you take them is a powerful tool of human cognition,” DeepMind researchers wrote in a blog post. “If our algorithms are to develop equally sophisticated behaviors, they too must have the capability to ‘imagine’ and reason about the future.”

Prediction-based A.I. is nothing new. Programs have been taught how to efficiently evaluate different strategies, but it has proven challenging to train an A.I. to construct a plan. For example, DeepMind’s most notable success story is AlphaGo, a program that has beaten several different world champion Go players. That program was so successful precisely because it understood how to most efficiently execute a narrow set of actions bound by a narrow set of rules. Throw in a more open scenario, however, such as designing transportation routes for an urban rail system, and it would falter.

“The real world is complex, rules are not so clearly defined and unpredictable problems often arise,” the researchers wrote. “Even for the most intelligent agents, imagining in these complex environments is a long and costly process.”

The new A.I. agents outlined in the pair of new papers learn to extract information relevant to future decision-making and jettison nearly everything else that’s considered irrelevant. Armed only with information about what the future might be, the A.I. programs are able to construct solutions and, more importantly, compare which ones would provide the most desirable outcomes or rewards, and how.
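In spirit, this kind of planning can be sketched in a few lines: an agent uses an internal model of its environment to “imagine” the result of each candidate plan before acting, then executes the plan with the best imagined reward. The sketch below is a deliberately simplified, hypothetical illustration (the toy environment, function names, and exhaustive search are assumptions for clarity, not DeepMind’s actual architecture, which learns its model and search strategy).

```python
from itertools import product

def imagined_rollout(model, state, actions):
    """Simulate a sequence of actions in the internal model; return total imagined reward."""
    total = 0.0
    for action in actions:
        state, reward = model(state, action)
        total += reward
    return total

def plan(model, state, action_space, horizon=3):
    """Enumerate candidate plans, imagine each one, and pick the most rewarding."""
    best_plan, best_reward = None, float("-inf")
    for candidate in product(action_space, repeat=horizon):
        reward = imagined_rollout(model, state, candidate)
        if reward > best_reward:
            best_plan, best_reward = candidate, reward
    return best_plan, best_reward

# Toy 1-D world (illustrative only): the agent wants to reach position 5;
# reward at each step is the negative distance to the goal.
def toy_model(state, action):
    new_state = state + action  # action is -1, 0, or +1
    return new_state, -abs(5 - new_state)

best, reward = plan(toy_model, state=0, action_space=[-1, 0, 1], horizon=3)
print(best)  # the imagined-best plan moves steadily toward the goal: (1, 1, 1)
```

A real imagination-augmented agent replaces the hand-written `toy_model` with a learned environment model and replaces the brute-force enumeration with a learned policy for deciding which futures are worth imagining, which is what keeps planning tractable in complex environments.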

An A.I. agent playing the spaceship game. The red lines indicate executed trajectories, while blue and green designate imagined ones.

Google DeepMind

The DeepMind team tested these agents by tasking them with the puzzle game Sokoban and a spaceship navigation game. Both require reasoned planning in order to progress, and in both games, the A.I. agents armed with “imagination” performed better than their baseline counterparts.

This is just a first step toward developing an A.I. system that can be thrown into an unfamiliar environment and figure out its own way forward, but it emphasizes the increasingly blurred line between human thinking and A.I. cognition. Creativity and imagination, two of the traits we think of as most uniquely human, may not be exclusively ours anymore.
