When we talk about artificial intelligence in games, we usually picture smarter or more realistic enemies that don't come off as mindless automatons. New research, though, shows how a neural network could revolutionize the way player avatars move realistically through complicated game environments in real time.
"Phase-Functioned Neural Networks for Character Control," which will be presented at the ACM's upcoming SIGGRAPH conference this summer, describes a fundamentally new way of handling character animation.
In most games, character animation is handled through “canned,” pre-recorded motion capture.

This means an average player will see precisely the same motion cycles repeated thousands of times in a single play-through. "Our system works completely differently," University of Edinburgh researcher Daniel Holden told Ars in a recent interview.
“We start by making a huge database of animation data,” he said. “And we use machine learning to produce a system which maps directly from the user input to the animation of the character.
So, instead of storing all the data and selecting which clip to play with, [we] have a system which actually generates animations on the fly, given the user input.”
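The core trick behind the approach is that the network's weights themselves are a function of a cyclic "phase" variable tracking where the character is in its gait; the weights are blended from a few stored sets via cubic Catmull-Rom interpolation around the phase circle. Below is a minimal numpy sketch of that idea. The layer sizes, random initialization, and two-layer structure here are illustrative stand-ins, not the paper's actual architecture or trained weights.

```python
import numpy as np

# Illustrative dimensions -- the real system's input/output vectors are much larger.
INPUT_DIM, HIDDEN_DIM, OUTPUT_DIM = 32, 64, 24
NUM_CONTROL_POINTS = 4  # weight sets spaced around the phase circle

rng = np.random.default_rng(0)
# One full set of network weights per control point on the phase circle.
W0 = rng.standard_normal((NUM_CONTROL_POINTS, HIDDEN_DIM, INPUT_DIM)) * 0.1
W1 = rng.standard_normal((NUM_CONTROL_POINTS, OUTPUT_DIM, HIDDEN_DIM)) * 0.1

def catmull_rom(y0, y1, y2, y3, mu):
    """Cubic Catmull-Rom interpolation between y1 and y2 (mu in [0, 1])."""
    return ((-0.5 * y0 + 1.5 * y1 - 1.5 * y2 + 0.5 * y3) * mu ** 3
            + (y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3) * mu ** 2
            + (-0.5 * y0 + 0.5 * y2) * mu
            + y1)

def phase_function(weights, phase):
    """Blend stored weight sets cyclically, driven by the phase in [0, 2*pi)."""
    n = weights.shape[0]
    t = (phase / (2.0 * np.pi)) * n
    i1 = int(t) % n
    mu = t - int(t)
    i0, i2, i3 = (i1 - 1) % n, (i1 + 1) % n, (i1 + 2) % n
    return catmull_rom(weights[i0], weights[i1], weights[i2], weights[i3], mu)

def pfnn_forward(x, phase):
    """Map user/character input features to a pose; weights depend on the phase."""
    w0 = phase_function(W0, phase)
    w1 = phase_function(W1, phase)
    h = np.maximum(w0 @ x, 0.0)  # ReLU hidden layer
    return w1 @ h

x = rng.standard_normal(INPUT_DIM)  # stand-in for gamepad/trajectory features
pose = pfnn_forward(x, phase=1.3)
print(pose.shape)  # (24,)
```

Because the weights vary smoothly and cyclically with the phase, the same user input produces different poses at different points in the stride, which is what lets the system generate animation on the fly instead of cycling canned clips.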