When you’re an A.I. researcher at Google, even your days off are filled with neural nets. Mike Tyka is a Google scientist who recently helped create the company’s DeepDream project, but this week he posted details of a personal project that could someday make DeepDream seem primitive. That famous program works essentially by blending together elements of existing pictures and then modifying the resulting collage; Tyka’s new approach takes the much more difficult and potentially rewarding path of teaching an A.I. to create all-new portraits from scratch.

“I don’t mind if the results are not necessarily realistic but fine texture is important no matter what even if it’s surreal but [high-resolution] texture,” Tyka commented Tuesday on his blog.

How It Works

The approach uses a “generative adversarial network” (GAN) to refine the A.I.’s abilities over time. A GAN pairs two neural networks that work in opposition to one another: one, the generator, draws a picture from scratch (the generative part), while the other, the discriminator, tries to tell whether a given picture is real or A.I.-generated (the adversarial part). The system trends toward better and better looking portraits as the generator learns to trick the discriminator into misidentifying its creations as real. When that happens, the discriminator learns from its mistake and gets better at picking out fakes in the future. In this way, the generative and adversarial halves of the system progress together, each one continually driving evolution in the other.
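That alternating push-and-pull can be sketched in code. The following is a deliberately tiny toy example, not Tyka’s actual system: the “images” are just numbers drawn from a 1-D distribution, the generator is a one-parameter affine map, and the discriminator is logistic regression, but the two losses and the alternating update steps are the standard GAN recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real "images" are numbers drawn from N(4, 0.5): the target distribution.
def sample_real(n):
    return rng.normal(4.0, 0.5, size=n)

# Generator: maps noise z to a sample via x = w_g * z + b_g.
w_g, b_g = 0.1, 0.0
# Discriminator: logistic model D(x) = sigmoid(w_d * x + b_d), i.e. P(real).
w_d, b_d = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    z = rng.normal(size=batch)
    x_real = sample_real(batch)
    x_fake = w_g * z + b_g

    # --- Discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    a_real = w_d * x_real + b_d
    a_fake = w_d * x_fake + b_d
    # Gradients of -log D(real) - log(1 - D(fake)) w.r.t. the two logits.
    g_real = -(1.0 - sigmoid(a_real))
    g_fake = sigmoid(a_fake)
    w_d -= lr * np.mean(g_real * x_real + g_fake * x_fake)
    b_d -= lr * np.mean(g_real + g_fake)

    # --- Generator step: push D(fake) toward 1 (non-saturating loss) ---
    a_fake = w_d * x_fake + b_d
    g_logit = -(1.0 - sigmoid(a_fake))   # d(-log D(fake)) / d logit
    g_x = g_logit * w_d                  # backprop through the discriminator
    w_g -= lr * np.mean(g_x * z)
    b_g -= lr * np.mean(g_x)

# After training, the generator's samples should have drifted toward the
# real distribution's mean (~4), because fooling the discriminator is only
# possible by producing samples that look like the real ones.
fakes = w_g * rng.normal(size=1000) + b_g
```

The same dynamic scales up to Tyka’s portraits: replace the affine map with a deep convolutional generator, the logistic model with a convolutional classifier, and the 1-D numbers with face photos.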

That evolution, on the face-creation side, comes from a basic understanding of how faces look, detail by detail. This sort of GAN training wouldn’t currently work as well on, say, pictures of bedrooms, because bedrooms don’t conform to a reliable pattern of construction: sometimes beds go left of dressers, but mouths never go above eyes. The A.I. learns very little from any one photo, but by taking many together it can begin to tease out some of the basic principles of what makes faces look like faces.

There’s the simple, Picasso-level stuff that dictates where features go, and the all-important blending of textures to make the different elements look like one cohesive whole. Tyka mentioned in the post that even when shooting for unrealistic art pieces, this texture work is still crucial to a pleasing final product.

In the case of this lady’s hair, the low resolution of the training data produces a sort of impressionistic idea of what hair looks like, because the A.I. has never seen hair in enough detail to pick out the fine distinctions between strands. That’s simply because a sufficiently high-resolution dataset isn’t available right now.

One interesting implication of the system’s generative nature is that it can develop its own style. Training GANs on the same dataset but with slightly different methods of analysis could lead to noticeably different approaches to pixel painting. Eventually, of course, both should theoretically converge on photorealism if they keep getting fed more and more real photos, but at least for the time being, A.I. brush strokes could be an intriguing trend to watch.

Tyka has previously sold DeepDream’s artistic acid trips for tidy sums — we’ll have to wait and see whether people will be willing to pay for portraits of people they’ve never seen before.

Check out a few more examples of the A.I.’s creations (no, that’s not a photo of Prince) below.

Photos via Mike Tyka