Remember that Bizarro World “Myidol” app that went viral this spring because of its ability to use facial recognition software to make a digital avatar that looks and moves almost exactly like you do? That’s about to become a reality… sort of.

Researchers at Stanford have figured out how to transfer one person’s facial expressions onto another person’s in real time via video.

The setup shown in Stanford’s demonstration video uses just a consumer-grade PC and a depth camera for each actor — meaning you don’t need fancy equipment to pull off this trompe l’oeil, just some complex algorithms.

The researchers developed a new real-time algorithm that takes high-quality visuals of each participant’s face and uses them to transpose the expressions from a “source actor” (the person providing the facial expressions and/or speech) onto a “target actor” (the person whose face will be manipulated according to what the source actor does). The source actor’s visual information goes through the program and gets rendered on top of the target actor’s video stream, so it looks like the target actor is smiling, talking, sticking his/her tongue out, or whatever else comes to mind.

Thanks to a series of reference points mapped out around the face (similar to the Myidol app), the types of visual information measured by the system include face shape and features (like your eyes, nose, mouth, and even wrinkles) as well as facial texture and depth. The program then matches the source actor’s reference points to the target actor’s and superimposes those expressions onto the target for a hyper-realistic video feed.
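To get a feel for the landmark-matching idea, here’s a very loose toy sketch: the real system fits a dense 3-D face model and re-renders texture on the GPU, but the core trick — measure the source actor’s expression as landmark displacements, then map those displacements into the target’s face geometry — can be shown in 2-D with plain NumPy. All function names here are made up for illustration, not taken from the researchers’ code.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping one set of landmarks onto another."""
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])        # append 1s for the translation column
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)  # M is 3x2: linear part + translation
    return M

def transfer_expression(neutral_src, expressive_src, neutral_tgt):
    """Apply the source actor's landmark displacements to the target's neutral face."""
    M = fit_affine(neutral_src, neutral_tgt)
    delta = expressive_src - neutral_src         # the "expression" as landmark offsets
    mapped_delta = delta @ M[:2]                 # rotate/scale offsets into target space
    return neutral_tgt + mapped_delta            # target landmarks wearing the expression

# Toy example: target face is the source face scaled by 2 and shifted.
neutral_src = np.array([[0.0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]])
neutral_tgt = neutral_src * 2 + np.array([3.0, 4.0])
expressive_src = neutral_src + np.array([0.1, 0.0])  # source "smiles" 0.1 to the right
out = transfer_expression(neutral_src, expressive_src, neutral_tgt)
```

A real pipeline would do this for dozens of tracked points per frame (plus depth and texture), then re-render the warped target face over the live video.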

Since this system is still in the works (and pretty rudimentary equipment was used), the end results are still a little uncanny — like the semi-disturbing faux teeth the program superimposes inside the target’s mouth so there’s not a gaping black hole when it’s “open” (see above)… as well as the sort of blank look in the target’s eyes. But those are small potatoes compared to the cool things this program could eventually do for us.

Here are a couple of ideas for how this tech could be used:

  • You’ve got an important job interview on Skype, and you’re not dressed for success. Just use a fancy-looking stand-in while you provide the facial expressions and killer interview answers.
  • You’re watching a live-streamed lecture from a top professor in Germany — the only problem is that you don’t speak German. No worries, thanks to a real-time translator using this software, you can watch the prof deliver her speech in your native tongue without any atrocious dubbing delay in the visuals.

Watch the expression-swap in action:

Photos: Screenshot via YouTube

Tatiana is a Brooklyn-based writer, editor, and photographer from Minneapolis. Her work has also appeared in the Village Voice, City Pages, and LA Weekly. Tatiana is the Weekend News Editor for Inverse.
