Inferred movements — New research from the University of Edinburgh and Adobe Research could be a huge leap forward in creating more natural in-game movements. The research uses deep neural networks to guide animated characters by inferring movements, allowing characters to actually learn how to interact with various objects in a game world.
In order to create its lifelike motion, the neural network must first study a database of motions captured from a live performer on a soundstage. Where this method really becomes interesting, though, is in filling in the gaps of knowledge between pre-defined movements.
Neural network magic — Most animated characters — especially those in video games — can carry out only a programmed set of individual movements that have been captured and coded from human performances. This process is inherently limiting: developers can plan out a thousand different movements for a character, but that character can never complete movements outside that set.
It’s like this: you can show a character how to sit, and you can show the character how to walk—but the in-between portion, where you slow down and turn your body before sitting, is trickier. This is where neural networks come in: they allow the character to learn the in-betweens without those precise motions being captured and programmed.
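To make the idea concrete, here is a toy sketch of the in-betweening interface: given two keyframe poses (represented as hypothetical joint angles), generate the transition frames between them. Plain linear interpolation stands in for the trained neural network, which in the actual research would predict far more natural, physically plausible transitions; the pose values and function names here are illustrative assumptions, not from the paper.

```python
import numpy as np

def inbetween(start_pose, end_pose, num_frames):
    """Generate intermediate poses between two keyframes.

    A stand-in for the learned model: the research uses a deep
    neural network to predict these transition frames, while this
    sketch simply interpolates linearly between the keyframes.
    """
    start = np.asarray(start_pose, dtype=float)
    end = np.asarray(end_pose, dtype=float)
    # t runs from 0 to 1 across the transition, excluding the keyframes.
    ts = np.linspace(0.0, 1.0, num_frames + 2)[1:-1]
    return [tuple((1 - t) * start + t * end) for t in ts]

# Hypothetical joint angles (degrees) for a "walking" and a "seated" pose.
walking = (0.0, 10.0, -5.0)
seated = (90.0, 45.0, 0.0)
frames = inbetween(walking, seated, num_frames=3)
```

A learned in-betweener replaces the straight-line blend with motion that slows, turns, and shifts weight the way a real performer would—but the input/output shape is the same: keyframes in, transition frames out.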
Big news for big files — If successful, this technology could change the standard mode of character animation for the better. The potential implications are pretty huge: for example, using neural networks actually cuts down on the amount of data needed to run a game, because characters can learn movements instead of each one needing to be pre-programmed. This is especially relevant now that game files are steadily ballooning to massive sizes with no limit in sight.
Neural network animation technology could easily be ported to other sectors, too, such as scientific visualization. The research is being presented next month at the ACM SIGGRAPH Asia conference.