
Quick Transitions With Cached Multi-way Blends

Alex J. Champandard on October 4, 2007

This week’s Thursday Theory continues coverage from the Symposium on Interactive 3D Graphics and Games 2007 with another whitepaper about character animation. See the previous posts for more technology applicable to game AI.

Today’s entry is from the University of California, Berkeley and introduces techniques for performing quick transitions between motions using multi-way blends. (Thanks to Leslie Ikemoto for providing the new link; it’s not even in Google yet!)

There are three main contributions in this paper:

  • The idea of improving the quality and speed of blends between different motions (e.g. walking and skipping) by using similar motions as intermediate steps for blending (e.g. running); see the sketch after this list.

  • An algorithm for finding transitions automatically by measuring the similarity between short animation clips and clustering them together.

  • A learning classifier to measure the realism of motion transitions that are chosen by the algorithm to help select the best candidate.
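
To make the first contribution concrete, here’s a minimal sketch (my own illustration, not the paper’s code) of a multi-way blend: each output pose is a weighted combination of time-aligned poses from several clips, with the intermediate clip (e.g. running) carrying most of the weight in the middle of the transition.

    import numpy as np

    def multiway_blend(clips, weight_fns, t):
        """Blend several time-aligned clips at normalized time t in [0, 1].

        clips      -- functions mapping t to a pose vector (e.g. joint angles)
        weight_fns -- functions mapping t to a scalar blend weight
        """
        weights = np.array([w(t) for w in weight_fns], dtype=float)
        weights /= weights.sum()                 # weights must sum to 1 at every instant
        poses = np.array([clip(t) for clip in clips])
        return weights @ poses                   # weighted sum of poses

    # Hypothetical 3-way blend from walking to skipping via running:
    # weight_fns = [lambda t: (1 - t) ** 2,      # walk fades out
    #               lambda t: 2 * t * (1 - t),   # run peaks mid-transition
    #               lambda t: t ** 2]            # skip fades in

In a real pipeline the rotational part of each pose would be blended with quaternion interpolation rather than a straight weighted sum, but the structure is the same.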

The resulting technology allows much faster, high-quality blends between animations than traditional motion graphs. The transitions are also not restricted to specific frames; another matching clip can be blended in at any point in the animation.

Timeline of the animated character, running from left to right.

View or download the movie (MOV, 22 Mb).

Here’s the abstract:

We describe a discriminative method for distinguishing natural-looking from unnatural-looking motion. Our method is based on physical and data-driven features of motion to which humans seem sensitive. We demonstrate that our technique is significantly more accurate than current alternatives.

We use this technique as the testing part of a hypothesize-and-test motion synthesis procedure. The mechanism we build using this procedure can quickly provide an application with a transition of user-specified duration from any frame in a motion collection to any other frame in the collection. During pre-processing, we search all possible 2-, 3-, and 4-way blends between representative samples of motion obtained using clustering. The blends are automatically evaluated, and the recipe (i.e., the representatives and the set of weighting functions) that created the best blend is cached.

At run-time, we build a transition between motions by matching a future window of the source motion to a representative, matching the past of the target motion to a representative, and then applying the blend recipe recovered from the cache to source and target motion. People seem sensitive to poor contact with the environment like sliding foot plants. We determine appropriate temporal and positional constraints for each foot plant using a novel technique, then apply an off-the-shelf inverse kinematics technique to enforce the constraints. This synthesis procedure yields good-looking transitions between distinct motions with very low online cost.
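
Stepping outside the abstract for a moment, the hypothesize-and-test pipeline can be pictured roughly as below. This is only a sketch of the structure, not the paper’s code: classifier, make_blend, nearest_representative, apply_recipe, fix_footplants and the motion-windowing methods are all assumed placeholders for the corresponding steps.

    from itertools import combinations

    def precompute_blend_cache(representatives, classifier, make_blend, weight_schemes):
        """Offline: cache the best 2-, 3- or 4-way blend recipe for every
        ordered pair of cluster representatives."""
        cache = {}
        for src in representatives:
            for dst in representatives:
                if src is dst:
                    continue
                best_score, best_recipe = float("-inf"), None
                others = [r for r in representatives if r is not src and r is not dst]
                # 0, 1 or 2 intermediate clips gives 2-, 3- and 4-way blends.
                for num_mids in (0, 1, 2):
                    for mids in combinations(others, num_mids):
                        clips = (src, *mids, dst)
                        for weights in weight_schemes[len(clips)]:
                            transition = make_blend(clips, weights)
                            score = classifier.naturalness(transition)
                            if score > best_score:
                                best_score, best_recipe = score, (clips, weights)
                cache[(src, dst)] = best_recipe
        return cache

    def build_transition(source, target, frame, duration, cache,
                         nearest_representative, apply_recipe, fix_footplants):
        """Online: look up the cached recipe and apply it to the actual motions."""
        # Match the upcoming window of the source motion to a representative.
        src_rep = nearest_representative(source.window_after(frame, duration))
        # Match the recent past of the target motion to a representative.
        dst_rep = nearest_representative(target.window_before(0, duration))
        # Apply the cached recipe (clips plus weighting functions) to the real clips.
        clips, weights = cache[(src_rep, dst_rep)]
        blended = apply_recipe(source, target, clips, weights, frame, duration)
        # Clean up foot plants with temporal/positional constraints and IK.
        return fix_footplants(blended)

All the expensive search happens offline in the first function; the second only does two nearest-representative lookups and a table read, which is what keeps the online cost so low.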

Download the paper from the site (PDF, 447 Kb):

Quick Transitions with Cached Multi-way Blends
Ikemoto L., Arikan O. and Forsyth D.
Proceedings of the Symposium on Interactive 3D Graphics and Games, 2007

Now for a quick evaluation of the technology based on how easy it would be to use for game AI.

Applicability to games: 8/10
This kind of technology is suitable for studios that have lots of motion capture data. The approach in this paper has the advantage of replacing traditional motion graphs, offering better and faster blends in the process. Thanks to the clustering, little effort is required from the animators to classify the motion clips, although it’ll most likely be necessary to annotate the clusters anyway.
Usefulness for character AI: 8/10
Having fast blends for AI is less of a requirement than for player control, but it’s certainly a nice feature to be able to blend from any frame to any other clip at any time. However, to provide the AI with an intuitive interface, an extra data structure is required to look up motion clips based on their type and the desired parameters (e.g. walking fast).
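Such an interface could be as simple as a tag-indexed table. Here’s a hypothetical sketch; the ClipIndex class and its parameter-matching rule are invented for illustration, not taken from the paper.

    class ClipIndex:
        """Index from clip type and parameters to motion clips, so the AI can
        ask for e.g. a fast walk without knowing anything about blend recipes."""

        def __init__(self):
            self._clips = {}  # e.g. "walk" -> [(params_dict, clip), ...]

        def add(self, clip_type, params, clip):
            self._clips.setdefault(clip_type, []).append((params, clip))

        def find(self, clip_type, **desired):
            """Return the clip of the requested type whose parameters best match."""
            candidates = self._clips.get(clip_type, [])
            if not candidates:
                return None

            def mismatch(entry):
                params, _ = entry
                return sum(abs(params.get(key, 0.0) - value)
                           for key, value in desired.items())

            return min(candidates, key=mismatch)[1]

    # Example: index.find("walk", speed=2.5) picks the walking clip closest to 2.5 m/s.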
Simplicity to implement: 3/10
The ideas in this paper require a non-negligible investment in base technology. You’ll need technology similar to that used to generate motion graphs as a base (e.g. calculating distance grids, dynamic programming). On top of that, a custom classifier based on Hidden Markov Models is necessary, as well as a k-means clustering algorithm in “spectral embedded space”.
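For the clustering piece in particular, the standard spectral-embedding recipe gives the flavour. The sketch below uses NumPy and SciPy; the Gaussian kernel width and the choice of eigenvectors are my assumptions rather than values from the paper.

    import numpy as np
    from scipy.cluster.vq import kmeans2

    def cluster_clips(distances, num_clusters, sigma=1.0):
        """Group motion clips by spectral embedding followed by k-means.

        distances -- (n, n) symmetric matrix of pairwise clip distances
        """
        # Turn distances into affinities with a Gaussian kernel.
        affinity = np.exp(-distances ** 2 / (2.0 * sigma ** 2))
        # Symmetrically normalize: D^-1/2 A D^-1/2.
        d_inv_sqrt = 1.0 / np.sqrt(affinity.sum(axis=1))
        normalized = affinity * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
        # Embed each clip using the eigenvectors with the largest eigenvalues
        # (np.linalg.eigh returns eigenvalues in ascending order).
        _, eigvecs = np.linalg.eigh(normalized)
        embedding = eigvecs[:, -num_clusters:]
        embedding /= np.linalg.norm(embedding, axis=1, keepdims=True)
        # Plain k-means in the embedded space gives the clip clusters.
        _, labels = kmeans2(embedding, num_clusters, minit="++")
        return labels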
Characters lined up.

Screenshot: Comparing the performance of different classifiers.

How do you think such an approach could be applied in games to improve the AI?
