The latest addition to the list of white papers on game AI in the AiGameDev.com wiki, and to our growing collection of technical videos, is a new research project on the subject of motion planning. Special thanks to Lucas Kovar for pointing it out. (Reminder: we’re running a masterclass on animation with Lucas this weekend if you’re a member.)
This paper by Wan-Yen Lo and Matthias Zwicker, presented at this year’s Symposium on Computer Animation (2008), shows how to apply reinforcement learning to the problem of planning motion based on parameterized animations. Technically, this is quite a challenging problem, so it’s nice to see people trying to address it and find efficient solutions that could potentially be used to control game characters.
Games rely on increasingly large amounts of motion capture data, which causes many problems in practice:
- Building motion controllers manually to decide, for example, how to move to a specific location or pick up an object can take a lot of work. Automating this, however, can be difficult in practice.
- Planners provide one approach to automatically working out valid sequences of motion segments, but they struggle with the size of motion graphs used in modern games.
- Solutions based on reinforcement learning can be a little unstable, and also have trouble dealing with the complexity of the problem in practice.
The paper described here addresses these issues by automatically calculating a reactive policy for deciding which animations to play.
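To make the idea of a reactive policy concrete, here’s a minimal sketch (our own illustration, not code from the paper): once the long-term reward of each animation has been precomputed offline for each state, runtime selection reduces to a lookup with no search at all.

```python
# Hypothetical sketch of a reactive animation policy. The table layout and
# all clip/state names are our own illustration, not the paper's code.
def reactive_policy(q_table, state, clips):
    """Pick the animation clip with the highest precomputed long-term reward.

    q_table: dict mapping (state, clip) -> expected long-term reward,
             filled in offline by the learning step.
    """
    return max(clips, key=lambda clip: q_table.get((state, clip), float("-inf")))

# Example: from the "standing" state, walking toward the goal scores highest.
q_table = {("standing", "walk"): 0.9,
           ("standing", "idle"): 0.1,
           ("standing", "jump"): 0.3}
print(reactive_policy(q_table, "standing", ["walk", "idle", "jump"]))  # walk
```

The point is that all the expensive work happens offline; the runtime cost per decision is just a table lookup, which is what makes this attractive for game characters.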
The main contribution of this work is to apply reinforcement learning to the problem, with a number of improvements. In particular:
- The RL technique is based on a “tree-based fitted Q iteration” algorithm, which approximates the optimal long-term reward function using a planning-like approach.
- The system presented here can deal with parametric motions, so it’s possible to learn a policy that controls an actor to reach an exact point in space. (Applying reinforcement learning to such continuous problems is not a trivial task.)
- This solution can help automatically craft locomotion controllers based on an underlying parametric motion graph, and even plan to pick up and interact with arbitrary objects in space.
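For a flavor of the underlying technique, the core of fitted Q iteration with tree-based regression (in the spirit of the batch-mode RL this family of methods builds on) can be sketched in a few lines. The state/action encoding, the toy reward, and the use of scikit-learn’s extra-trees regressor below are our own assumptions for illustration, not the paper’s setup.

```python
# Sketch of tree-based fitted Q iteration on a batch of logged transitions.
# States and actions are scalars here for simplicity; real motion states
# would be feature vectors derived from the character and its goal.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(transitions, n_actions, gamma=0.9, n_iters=20):
    """transitions: list of (state, action, reward, next_state, done).
    Returns a regressor approximating Q(state, action)."""
    s, a, r, s2, done = map(np.asarray, zip(*transitions))
    X = np.column_stack([s, a])   # regression inputs: (state, action) pairs
    y = r.astype(float)           # first iteration's target: immediate reward
    q = None
    for _ in range(n_iters):
        # Fit a tree ensemble to the current Q targets.
        q = ExtraTreesRegressor(n_estimators=50, random_state=0).fit(X, y)
        # Bellman backup: bootstrap from the best action in the next state.
        next_q = np.max([q.predict(np.column_stack([s2, np.full(len(s2), b)]))
                         for b in range(n_actions)], axis=0)
        y = r + gamma * next_q * (1.0 - done)
    return q
```

The learned regressor then doubles as the reactive policy: at runtime you evaluate `q.predict` for each candidate animation and pick the argmax, with no replanning.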
Figure 1: Partitioning of space to help guide the algorithm when searching for control policies.
Abstract & References
The abstract of the paper is the following:
“We present a novel approach to learn motion controllers for real-time character animation based on motion capture data. We employ a tree-based regression algorithm for reinforcement learning, which enables us to generate motions that require planning. This approach is more flexible and more robust than previous strategies. We also extend the learning framework to include parameterized motions and interpolation. This enables us to control the character more precisely with a small amount of motion data. Finally, we present results of our algorithm for three different types of controllers.”
You can download the paper from the website:
Real-Time Planning for Parameterized Human Motion
Wan-Yen Lo and Matthias Zwicker
Eurographics Symposium on Computer Animation, 2008.
Download PDF (3.3 Mb)
In practice, here’s how it all applies to artificial intelligence in game development.
- Applicability to games: 6/10
- Game developers typically favor predictable, well-understood technical solutions to problems in AI and animation. Such a reinforcement learning system could help craft animation policies offline, and seems like a natural complement to any planning-based solution. The downside, however, is that it’s not clear how well this approach scales to many motions and many parameters — which makes it a bit of a risk for game developers.
- Usefulness for character AI: 7/10
- The general idea of pre-calculating reactive policies from a motion graph search is particularly useful, since many game actors require tight control over movement without too much replanning. That said, these kinds of precalculations can be done in many different ways, and it’s arguable whether a manually crafted controller wouldn’t be worth the investment even if it required more time.
- Simplicity to implement: 3/10
- This kind of animation technology stack generally takes a lot of time to get up and running, and even if you’ve done this kind of work before, it can take months before you’re ready to provide interactive control. On top of the parametric motions, you’ll then need to build a reinforcement learning system as well as the basic tree search, so adopting the solution presented in this paper is not an easy decision to make.
Figure 2: Encoding of continuous motion parameters so they can be solved using reinforcement learning.
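One practical wrinkle Figure 2 hints at: standard value-iteration machinery assumes a discrete set of actions, while parameterized motions come with continuous knobs (a turning angle, a target distance). One common workaround — sketched below under our own assumptions; the paper’s exact encoding may well differ — is to sample candidate parameter values and keep the best-scoring pair.

```python
# Generic sampling heuristic for maximizing a learned value function over
# a mixed discrete/continuous action space. Names are our own illustration.
import numpy as np

def best_parameterized_action(q_fn, state, clip_ids, n_samples=16, seed=0):
    """Approximate argmax over (clip, parameter) pairs by sampling the
    continuous parameter in [0, 1]. q_fn(state, clip, param) -> value."""
    rng = np.random.default_rng(seed)
    best_clip, best_param, best_value = None, None, float("-inf")
    for clip in clip_ids:
        for param in rng.uniform(0.0, 1.0, n_samples):
            value = q_fn(state, clip, param)
            if value > best_value:
                best_clip, best_param, best_value = clip, param, value
    return best_clip, best_param
```

The density of the sampling trades accuracy for runtime cost, which is presumably part of why learning over continuous parameters is called out as non-trivial.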
What do you think about this technology and its application to game AI? Post a comment below.