
Video: Stunt Actors May Be Replaced By This A.I. Technology One Day Soon

A new artificial intelligence system produces computer-animated stuntmen that could make action movies cooler than ever. Researchers at the University of California, Berkeley have developed a system capable of recreating some of the slickest moves in martial arts, with the potential to replace real-life human actors.

UC Berkeley graduate student Xue Bin ‘Jason’ Peng says the technology produces movements that are hard to distinguish from those of real humans.

“This is actually a pretty big leap from what has been done with deep learning and animation,” Peng said in a statement released with his research, which was presented at the 2018 SIGGRAPH conference in Vancouver, Canada, in August. “In the past, a lot of work has gone into simulating natural motions, but these physics-based methods tend to be very specialized; they’re not general methods that can handle a large variety of skills.

"We’re moving toward a virtual stuntman.

“If you compare our results to motion-capture recorded from humans, we are getting to the point where it is pretty difficult to distinguish the two, to tell what is simulation and what is real. We’re moving toward a virtual stuntman.”

A paper on the project, dubbed DeepMimic, was published in the journal ACM Transactions on Graphics in August. In September, the team made its code and motion-capture data available on GitHub for others to try.

The team used deep reinforcement learning techniques to teach the system how to move. It took motion-capture data from real-life performances, fed it into the system, and set the system to practice the moves in simulation for the equivalent of a whole month, training 24 hours per day. DeepMimic learned 25 different moves, like kicking and backflips, comparing its results each time to see how close it came to the original mocap data.
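
The core mechanic is simple enough to sketch: at each timestep, the simulated character earns a reward for how closely its pose and joint velocities track the mocap reference, decaying exponentially with error. Here is a minimal, illustrative Python sketch, not the authors' code; the weights and exponential scales are assumptions, not the paper's exact values:

```python
import numpy as np

def imitation_reward(sim_pose, ref_pose, sim_vel, ref_vel,
                     w_pose=0.7, w_vel=0.3):
    """Illustrative imitation reward in (0, 1]: 1.0 means the simulated
    character matches the mocap reference exactly, and the reward decays
    exponentially as tracking error grows. Weights and scales are assumed."""
    pose_err = np.sum((np.asarray(sim_pose) - np.asarray(ref_pose)) ** 2)
    vel_err = np.sum((np.asarray(sim_vel) - np.asarray(ref_vel)) ** 2)
    return w_pose * np.exp(-2.0 * pose_err) + w_vel * np.exp(-0.1 * vel_err)
```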

Unlike systems that attempt a whole move and fail over and over, DeepMimic breaks each move into stages; if the character stumbles at one point, such as the landing of a backflip, it can practice from that moment instead of starting over from the beginning.
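
The paper calls these ideas reference state initialization and early termination. A hedged sketch of both, with illustrative function names and thresholds:

```python
import numpy as np

rng = np.random.default_rng(0)

def reset_to_random_phase(ref_clip):
    """Reference state initialization: start each practice episode at a
    randomly sampled frame of the mocap clip, so late phases of a move
    (a backflip's landing, say) get as much practice as the takeoff."""
    frame = int(rng.integers(len(ref_clip)))
    return ref_clip[frame], frame

def fallen(head_height, threshold=0.3):
    """Early termination (an illustrative fall check): end the episode once
    the character has clearly fallen, so it doesn't spend the rest of the
    rollout flailing on the ground."""
    return head_height < threshold
```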

“As these techniques progress, I think they will start to play a larger and larger role in movies,” Peng tells Inverse. “However, since movies are generally not interactive, these simulation techniques might have more immediate impact on games and VR.

“In fact, simulated character[s] trained using [reinforcement learning] are already finding their way to games. Indie games could be a very nice testing ground for these ideas. But it might take a while longer before they are ready for AAA titles, since working with simulated characters do[es] require a pretty drastic shift from traditional development pipelines.”

Game developers are starting to experiment with these tools. One developer has already gotten DeepMimic running inside the Unity game engine.

Peng is hopeful that releasing the code will speed up its adoption. He also notes that the team has “been speaking to a number of game developers and animation studios about possible applications of this work, though I can’t go into too much detail about that yet.”

Machines regularly struggle with complex movement, as demonstrated by soccer-playing robots that tumble softly onto the grass instead of pulling off any high-octane moves. But there are signs of progress as A.I. gets to grips with the complexities of real-world movement and starts to correct itself more like a human would.

Perhaps DeepMimic could one day learn a new move in seconds, similar to how Neo learns kung fu in The Matrix.

Read the abstract below.

A longstanding goal in character animation is to combine data-driven specification of behavior with a system that can execute a similar behavior in a physical simulation, thus enabling realistic responses to perturbations and environmental variation. We show that well-known reinforcement learning (RL) methods can be adapted to learn robust control policies capable of imitating a broad range of example motion clips, while also learning complex recoveries, adapting to changes in morphology, and accomplishing user-specified goals. Our method handles keyframed motions, highly-dynamic actions such as motion-captured flips and spins, and retargeted motions. By combining a motion-imitation objective with a task objective, we can train characters that react intelligently in interactive settings, e.g., by walking in a desired direction or throwing a ball at a user-specified target. This approach thus combines the convenience and motion quality of using motion clips to define the desired style and appearance, with the flexibility and generality afforded by RL methods and physics-based animation. We further explore a number of methods for integrating multiple clips into the learning process to develop multi-skilled agents capable of performing a rich repertoire of diverse skills. We demonstrate results using multiple characters (human, Atlas robot, bipedal dinosaur, dragon) and a large variety of skills, including locomotion, acrobatics, and martial arts.
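
The “motion-imitation objective” and “task objective” the abstract combines can be pictured as a weighted sum of two rewards; a minimal sketch, with assumed weights:

```python
def combined_reward(r_imitation, r_task, w_imitation=0.7, w_task=0.3):
    """Blend staying true to the mocap clip's style (r_imitation) with
    progress on a user-specified goal, e.g. how close a thrown ball lands
    to its target (r_task). The weights here are illustrative."""
    return w_imitation * r_imitation + w_task * r_task
```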