
Near-optimal Character Animation with Continuous Control

Alex J. Champandard on August 4, 2007

Siggraph 2007 starts tomorrow. This post continues from yesterday’s review of character animation technology, discussing how this year’s innovations can be applied to game AI.

There’s another paper from the University of Washington’s Animation Research Labs, which presents a near-optimal continuous controller. It relies on two major innovations:

  • A low-dimensional reinforcement learning algorithm that learns a value function.

  • A real-time controller that selects motion clips based on the learned values (see the sketch below).
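
To make that second point concrete, here’s a minimal Python sketch of how such a controller might pick the next clip: score each candidate by its immediate task reward plus the discounted value of the state it leads to, and play the best one. All names (simulate, task_reward, the clip and state types) are illustrative assumptions, not the paper’s actual implementation.

    # Hypothetical sketch: greedy clip selection driven by a learned value function.
    # None of these names come from the paper; they only illustrate the idea.

    GAMMA = 0.95  # discount factor for future reward

    def select_next_clip(state, clips, value, task_reward, simulate):
        """Pick the motion clip that maximizes immediate reward plus the
        discounted value of the state the character ends up in."""
        best_clip, best_score = None, float("-inf")
        for clip in clips:
            next_state = simulate(state, clip)  # predicted state after playing the clip
            score = task_reward(next_state) + GAMMA * value(next_state)
            if score > best_score:
                best_clip, best_score = clip, score
        return best_clip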

The advantage of this approach is that less work is required to create a new type of motion (a.k.a. a controller). The animator prepares and annotates motion-capture clips as individual gaits, then, with the help of a programmer, specifies the objectives of the controller (e.g. moving in a specified direction). The rest is done automatically, producing realistic, near-optimal blends.
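
As a rough illustration of that authoring step (the data layout and reward below are assumptions, not the paper’s actual format): the clips carry gait annotations, and the objective is just a reward over the character’s state and the user’s goal.

    import math

    # Hypothetical gait annotations an animator might attach to each clip.
    clips = [
        {"name": "walk_fwd",   "gait": "walk", "turn_deg": 0.0,   "speed_mps": 1.4},
        {"name": "walk_left",  "gait": "walk", "turn_deg": 30.0,  "speed_mps": 1.2},
        {"name": "walk_right", "gait": "walk", "turn_deg": -30.0, "speed_mps": 1.2},
    ]

    def heading_objective(state, goal_heading):
        """Programmer-specified objective: reward facing the requested direction.
        Highest when the character's heading matches the goal, falling off with
        angular error (both angles in radians)."""
        diff = goal_heading - state["heading"]
        error = abs(math.atan2(math.sin(diff), math.cos(diff)))  # wrap to [-pi, pi]
        return -error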

Preview of the movie

View or download the movie (MOV, 139 MB).

Here’s the abstract:

We present a new model for real-time character animation with multidimensional, interactive control. The underlying motion engine is data-driven, enables rapid transitions, and automatically enforces foot-skate constraints without inverse kinematics. On top of this motion space, our algorithm learns approximately optimal controllers which use a compact basis representation to guide the system through multidimensional state-goal spaces. These controllers enable real-time character animation that fluidly responds to changing user directives and environmental constraints.

Download the paper (PDF, 0.8 MB):

Near-optimal Character Animation with Continuous Control
Treuille, A., Lee, Y., and Popović, Z.
ACM Transactions on Graphics 26(3)

Now for a brief assessment of how easy it would be to integrate this technology into upcoming games.

Applicability to games: 7/10
Generally useful, but only studios with large libraries of motion-capture clips would really benefit.
Usefulness for character AI: 8/10
It provides better and more realistic animated behaviors, and controllers for new types of motion are easier to create.
Simplicity to implement: 4/10
Requires a blend-tree (with optional modifications to prevent foot-skating) and a good understanding of reinforcement learning; see the sketch below.
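
For reference, a blend-tree here just means a structure that mixes poses from several clips. The sketch below shows the simplest possible node, a linear crossfade; it’s a simplified assumption, since a real engine would interpolate joint rotations properly and track foot contacts to avoid skating.

    # Minimal, hypothetical blend-tree leaf: a linear crossfade between two poses.
    # Poses are simplified to {joint_name: scalar value}; a production engine would
    # blend joint rotations (e.g. quaternion slerp) and preserve foot contacts.

    def blend_pose(pose_a, pose_b, weight):
        """Linearly interpolate two poses; weight = 0 gives pose_a, 1 gives pose_b."""
        return {joint: (1.0 - weight) * pose_a[joint] + weight * pose_b[joint]
                for joint in pose_a}
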
3D plots of the low-dimensional polynomial bases used to approximate the value function.
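
To give a flavour of what that means, the value function can be written as a weighted sum of a few polynomial basis functions. The sketch below is illustrative only: the real system works over a multidimensional state-goal space, and the particular basis here is an assumption, not the one used in the paper.

    # Illustrative only: a value function approximated by a tiny polynomial basis.
    # Here x could be, say, the angle between the current heading and the goal direction.

    def polynomial_basis(x):
        """A small polynomial basis over a single state variable x."""
        return [1.0, x, x * x, x ** 3]

    def value(x, weights):
        """Evaluate V(x) as a weighted sum of basis functions; the weights would
        come from the offline reinforcement-learning step."""
        return sum(w * phi for w, phi in zip(weights, polynomial_basis(x)))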

How do you think this technology can be best used to improve game AI?
