Today’s research project comes from the Carnegie Mellon Graphics department and involves creating responsive characters from motion fragments. There are two main innovative ideas behind this paper:
Gathering traces from the gameplay helps model player movement.
Using reinforcement learning helps predict future motion transitions.
Combined, these two ideas help the animation system anticipate what is about to happen so it can select better animations, without spending excessive computation time (e.g. on full planning).
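At runtime, the idea boils down to a constant-time table lookup: the next motion fragment is chosen from the previous fragment and the current player input, with all the expensive reasoning done offline. Here is a minimal sketch of that loop (the fragment ids, input names, and `policy` table are hypothetical, not from the paper):

```python
# Hypothetical precomputed policy: maps (previous fragment, player input)
# to the next fragment to play.  In the paper this table would be filled
# in offline by reinforcement learning; here it is hand-written.
policy = {
    (0, "run_forward"): 1,
    (0, "run_left"): 2,
    (1, "run_forward"): 1,
    (1, "run_left"): 2,
    (2, "run_forward"): 0,
    (2, "run_left"): 2,
}

def next_fragment(prev_fragment: int, player_input: str) -> int:
    """Constant-time selection of the next motion fragment.

    No online planning: the per-frame cost is a single dictionary lookup.
    """
    return policy[(prev_fragment, player_input)]

# Example: the character is playing fragment 0 when the player pushes left.
print(next_fragment(0, "run_left"))  # -> 2
```

The point of the sketch is only the shape of the runtime: all intelligence lives in how the table is built, which is where the player model and reinforcement learning come in.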
View or download the movie (AVI, 151 MB).
Here’s the abstract:
In game environments, animated character motion must rapidly adapt to changes in player input — for example, if a directional signal from the player’s gamepad is not incorporated into the character’s trajectory immediately, the character may blithely run off a ledge. Traditional schemes for data-driven character animation lack the split-second reactivity required for this direct control; while they can be made to work, motion artifacts will result. We describe an on-line character animation controller that assembles a motion stream from short motion fragments, choosing each fragment based on current player input and the previous fragment.
By adding a simple model of player behavior we are able to improve an existing reinforcement learning method for precalculating good fragment choices. We demonstrate the efficacy of our model by comparing the animation selected by our new controller to that selected by existing methods and to the optimal selection, given knowledge of the entire path. This comparison is performed over real-world data collected from a game prototype. Finally, we provide results indicating that occasional low-quality transitions between motion segments are crucial to high-quality on-line motion generation; this is an important result for others crafting animation systems for directly-controlled characters, as it argues against the common practice of transition thresholding.
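To make the “simple model of player behavior” concrete, here is a toy sketch (not the authors’ code) of how such a model can be folded into tabular value iteration. States are (fragment, input) pairs; a first-order player model gives the probability of the next input given the current one; the reward trades off how well a fragment serves the input against the cost of transitioning between fragments. All the specific fragments, inputs, probabilities, and reward terms below are made-up illustrations:

```python
# Toy value iteration over (fragment, input) states with a player model.
FRAGMENTS = [0, 1, 2]
INPUTS = ["left", "forward"]
GAMMA = 0.9  # discount factor

def match_reward(frag, inp):
    # Hypothetical: how well a fragment serves the current input.
    return 1.0 if (frag, inp) in {(2, "left"), (1, "forward")} else 0.0

def transition_cost(prev, nxt):
    # Hypothetical blending cost; note low-quality transitions are
    # penalized, not forbidden (no hard threshold).
    return 0.0 if prev == nxt else 0.2

# First-order player model: P(next input | current input).
player_model = {
    "left":    {"left": 0.8, "forward": 0.2},
    "forward": {"left": 0.3, "forward": 0.7},
}

def q_value(V, f, i, n):
    # Expected value of playing fragment n next from state (f, i).
    future = sum(p * V[(n, j)] for j, p in player_model[i].items())
    return match_reward(n, i) - transition_cost(f, n) + GAMMA * future

# Value iteration until (approximately) converged.
V = {(f, i): 0.0 for f in FRAGMENTS for i in INPUTS}
for _ in range(100):
    V = {(f, i): max(q_value(V, f, i, n) for n in FRAGMENTS)
         for f, i in V}

# Extract the greedy policy: best next fragment for each state.
policy = {(f, i): max(FRAGMENTS, key=lambda n: q_value(V, f, i, n))
          for f, i in V}
print(policy[(0, "left")])  # -> 2 (the fragment that matches "left")
```

Because the table is computed offline, the runtime controller is again a pure lookup; the player model only changes which entries the precomputation considers valuable.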
Download the paper (PDF, 1.2 MB):
Responsive Characters from Motion Fragments. McCann, J. and Pollard, N. S. ACM Transactions on Graphics, Vol. 26.
Here’s a quick assessment of how easy it would be to integrate the technology into upcoming games.
- Applicability to games: 8/10
- The video does a poor job of selling this technology, but the idea of predicting player movement is useful. It should be combined with collision queries (which most games already perform) for better-looking animations.
- Usefulness for character AI: 2/10
- The AI typically knows what it wants to do ahead of time, so there’s no need to predict it.
- Simplicity to implement: 5/10
- Most games already have blend-trees and motion graphs, but modeling and predicting the player’s behavior requires an understanding of reinforcement learning.
Interacting with the motion controller in real-time.
Which situations do you think this technology is particularly useful for?