
Parametric Motion Graphs

Alex J. Champandard on September 20, 2007

This week’s Thursday Theory post kicks off a series of white paper reviews from the Symposium on Interactive 3D Graphics and Games 2007, specifically relating to character animation. If you missed it, you can also see AiGameDev.com’s Siggraph 07 coverage from a game AI perspective.

This first submission is from the University of Wisconsin-Madison and introduces techniques for building motion graphs with parameterized states. There are two major contributions in this paper:

  • Combining parametric motions (e.g., a blend of multiple walking animations) with a motion graph (i.e. connections to other motions like running).

  • A method for creating realistic transitions automatically between parametric motions using sampling.

The resulting technology is capable of generating continuous motion that looks realistic, yet is responsive to interactive control of the parameters and type of motion.
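
To make the structure concrete, here is a minimal sketch (my own, not code from the paper) of what a parametric motion graph might look like as a data structure: each node holds a family of example clips spanning one type of motion, and edges record which parameter values allow a clean transition into another node. All names here (ParametricNode, ParametricMotionGraph, the clip names) are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class ParametricNode:
    """A parameterized space of motions of one type (e.g. 'walk', 'reach').

    The node stores several example clips; any point in the space is
    produced by blending the examples with weights derived from the
    requested parameter value (speed, reach target, etc.)."""
    name: str                  # e.g. "walk", "jump", "punch"
    example_clips: list        # source clips spanning the parameter space
    # target node name -> parameter regions where a good transition
    # was found by offline sampling
    transitions: dict = field(default_factory=dict)


@dataclass
class ParametricMotionGraph:
    nodes: dict = field(default_factory=dict)

    def add_node(self, node: ParametricNode) -> None:
        self.nodes[node.name] = node

    def add_transition(self, src: str, dst: str, region) -> None:
        """Record that motions from `src` whose parameters fall inside
        `region` can transition cleanly into node `dst`."""
        self.nodes[src].transitions.setdefault(dst, []).append(region)


# Hypothetical usage: a walk node that can transition into a jump node.
graph = ParametricMotionGraph()
graph.add_node(ParametricNode("walk", example_clips=["walk_slow", "walk_fast"]))
graph.add_node(ParametricNode("jump", example_clips=["jump_low", "jump_mid", "jump_high"]))
graph.add_transition("walk", "jump", region={"speed": (0.5, 2.0)})
```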

A skeleton walking along a line to a goal.

View or download the movie (WMV, 30 Mb).

Here’s the abstract:

In this paper, we present an example-based motion synthesis technique that generates continuous streams of high-fidelity, controllable motion for interactive applications, such as video games. Our method uses a new data structure called a parametric motion graph to describe valid ways of generating linear blend transitions between motion clips dynamically generated through parametric synthesis in realtime. Our system specifically uses blending-based parametric synthesis to accurately generate any motion clip from an entire space of motions by blending together examples from that space.

The key to our technique is using sampling methods to identify and represent good transitions between these spaces of motion parameterized by a continuously valued parameter. This approach allows parametric motion graphs to be constructed with little user effort. Because parametric motion graphs organize all motions of a particular type, such as reaching to different locations on a shelf, using a single, parameterized graph node, they are highly structured, facilitating fast decision-making for interactive character control. We have successfully created interactive characters that perform sequences of requested actions, such as cartwheeling or punching.
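
The "blending together examples from that space" is, at its core, a weighted combination of example clips. Below is a heavily simplified sketch of that step, assuming poses are flat arrays of joint values; a real implementation would time-warp the clips and blend root transforms and joint rotations separately, which this toy code does not attempt.

```python
import numpy as np


def blend_poses(example_poses, weights):
    """Linearly blend example poses (simplified to flat arrays of joint
    values). Real systems handle root motion and joint quaternions
    separately; this only illustrates the weighted-combination step."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # weights must sum to 1
    stacked = np.stack([np.asarray(p, float) for p in example_poses])
    return (weights[:, None] * stacked).sum(axis=0)


# Hypothetical example: two walk poses blended 70/30 to hit a speed
# between the slow and fast examples.
slow_walk_pose = [0.0, 0.1, 0.2]
fast_walk_pose = [0.4, 0.5, 0.6]
print(blend_poses([slow_walk_pose, fast_walk_pose], [0.7, 0.3]))
```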

Download the paper from the site (PDF, 0.3 Mb):

Parametric Motion Graphs
Heck, Rachel and Gleicher, Michael.
Proceedings of the Symposium on Interactive 3D Graphics and Games 2007

Now for a short assessment of the technology based on how simple it would be to use in games.

Applicability to games: 8/10
As a heavily mocap-based approach, this solution is ideal for AAA studios. (Rachel and Michael have worked with many companies, including EA and Rockstar.) The technology works with hand-created animations too, but the technological investment is harder to justify in that case.
Usefulness for character AI: 9/10
Having a logical representation like a motion graph for specifying types of motion, and being able to control each of them with simple parameters, is practically perfect; it almost deserves a ten!
Simplicity to implement: 8/10
The sampling algorithms in this paper seem rather simple to implement: just play the animations and gather data to build the transitions. However, there are a few technological dependencies that will take more time, notably building parametric motions automatically (e.g. with correct alignment and synchronization among the different walk cycles).
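
As a rough illustration of that sampling step, the sketch below densely samples two parametric spaces and keeps the parameter pairs whose end and start poses are close enough to blend. The names source_space, target_space and pose_distance are hypothetical placeholders; the paper uses a point-cloud distance in the style of Kovar et al., not the plain Euclidean distance used here.

```python
import itertools
import numpy as np


def pose_distance(pose_a, pose_b):
    """Placeholder distance between two poses. The paper uses a
    point-cloud distance; we use Euclidean distance over joint values
    to keep the sketch short."""
    return float(np.linalg.norm(np.asarray(pose_a) - np.asarray(pose_b)))


def sample_transitions(source_space, target_space, samples, threshold):
    """Densely sample both parametric spaces and keep the parameter
    pairs whose end/start poses are close enough to blend cleanly.

    `source_space(p)` / `target_space(p)` are assumed to synthesize a
    motion for parameter value p and return its last / first pose."""
    good_pairs = []
    for ps, pt in itertools.product(samples, samples):
        end_pose = source_space(ps)      # last frame of the source motion
        start_pose = target_space(pt)    # first frame of the target motion
        if pose_distance(end_pose, start_pose) < threshold:
            good_pairs.append((ps, pt))
    return good_pairs


# Hypothetical usage with toy 1-D parameter spaces standing in for
# "walk" and "jump" motion families.
walk = lambda p: [p, p * 0.5]
jump = lambda p: [p, p * 0.4]
print(sample_transitions(walk, jump, samples=np.linspace(0, 1, 5), threshold=0.2))
```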
Directing motion on a screen with a controller.

How do you think this kind of technology can be useful for game AI?

Discussion (4 Comments)

gware on October 2nd, 2007

Readers interested in the state of the art in animation systems should read this paper. I think most "high level" animation engines already use motion graphs and parametric blending, so this paper is well worth reading for a good insight into how animators and game programmers describe animations and transitions and use them at run-time. For those of you who already know these techniques, this paper describes a very clever way to auto-generate motion graphs. This is very interesting since it can help animators in their integration work. Interesting further work could be to use some supervised learning to get rid of the remaining "bad/unwanted transitions". This way the animator could train a tool which would check the generated motion graph for invalid transitions using external knowledge (i.e. knowledge from the animator or any other source external to the animation, like some qualifiers in a database or anything else that could help).

Applicability to games: 10/10
Animation engines are already using both techniques, and motion graph auto-generation should come in the near future (and I believe this paper is a very good starting point).

Usefulness for character AI: 10/10
Having an intelligent animation engine is a requirement when asking for believable behaviors, and this paper goes in that direction. The technique deserves a 10 :) (if not, who else?)

Simplicity to implement: 6/10
Although the algorithms, the implementation and the pictures are easy to understand, getting this running may require a bit of effort as the number of animations grows. The more data you use in your game, the bigger the motion graph will be. This means it may get harder to look up animations at runtime, even with crystal-clear transitions.

alexjc on October 6th, 2007

Thanks for your comments, Gabriel. I don't like giving a 10 because there's always room for improvement. In this case, I think the biggest problem is the large amount of motion capture data required, and the amount of work required from the artists to assist the "automatic" generation of the parametric motion graph... But you're right, it's certainly some of the best animation technology out there. I was lucky enough to work briefly with Rachel at Rockstar on this (that's how Table Tennis is animated)... However, don't underestimate the amount of programming & animation tweaking that's necessary to get this to work. Alex

quanticdream on June 11th, 2010

Hello, I went through this interesting paper. I could visualize it when performing a transition from one animation to another, but consider this scenario... Imagine that there are two parametric nodes (PN): 'Node1' containing two walk animations (W1 & W2), and the second PN 'Node2' containing three jump animations (J1, J2 & J3). When building the parametric graph (PG) offline, how are the parametric motion spaces Ns & Nt represented? Since Ns & Nt can represent infinite motion blends, i.e.:
1) Ns can be W1 & W2 blended with weights w1 = [0.0 .. 1.0] & w2 = 1 - w1
2) Nt can be J1, J2 & J3 blended with weights w1 = [0.0 .. 1.0], w2 = [0.0 .. 1 - w1] & w3 = 1 - (w1 + w2)
and vice versa when looking up during runtime? I'm just having trouble visualizing the system when doing a transition from one parametric blend to another parametric blend. Thanks a lot in advance. Regards, Matt

kiwilex on September 15th, 2011

Quanticdream, the transition between Node1 and Node2 relies on an offline sampling of movements generated from Ns and Nt. The idea is to look for the best candidates for transitions (with the Kovar distance function) from sampled Ns postures to sampled Nt postures, and to use them to build the edges of the parametric motion graph (i.e. data that give bounding boxes of parameters usable for the Nt node, given a movement generated from the Ns node).
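
To help visualize that edge data, here is a toy sketch, not taken from the paper: for each sampled source parameter it collects the target parameters within a distance threshold and stores their bounding box, which is roughly what an edge of the parametric motion graph records. The distance function here is a hypothetical stand-in for the Kovar-style point-cloud distance.

```python
import numpy as np


def transition_regions(source_samples, target_samples, distance, threshold):
    """For each sampled source parameter, collect the target parameters
    it can transition to and summarize them as a 1-D bounding box.

    `distance(ps, pt)` is a placeholder for a Kovar-style distance
    between the end of the source motion at ps and the start of the
    target motion at pt."""
    edge_data = {}
    for ps in source_samples:
        ok = [pt for pt in target_samples if distance(ps, pt) < threshold]
        if ok:
            edge_data[ps] = (min(ok), max(ok))   # bounding box of valid targets
    return edge_data


# Hypothetical usage with a toy 1-D distance between parameter values.
toy_distance = lambda ps, pt: abs(ps - pt)
params = np.linspace(0.0, 1.0, 11)
print(transition_regions(params, params, toy_distance, threshold=0.15))
```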
