Interview
Striving Towards Believable Animation for Character Behaviors

Alex J. Champandard on November 22, 2007

This week’s Thursday Theory post features an interview with Leslie Ikemoto. Leslie earned her Ph.D. researching character animation, notably applying artificial intelligence techniques to make the resulting behaviors more believable with less effort. Not only has she published multiple papers about techniques for video games, but she has also helped studios integrate these ideas into games.

I caught up with Leslie by email recently. You can find out more about her work by visiting her website.

Screenshot 1: Motion retargeting using Gaussian processes.

Alex Champandard: Hi Leslie, thanks for taking the time to answer these questions. Could you introduce yourself briefly and give us a little background into your research?

Leslie Ikemoto: Hi Alex, and thanks for inviting me to do this interview! I’m really honored.

I just graduated with my PhD in computer science from UC Berkeley. My advisor was David Forsyth. My thesis work was on synthesizing character animation, specifically on designing techniques to make a character animator’s life easier. Some of the work an artist has to do is pretty repetitive. We try to automate the repetitive parts using semi-supervised learning algorithms, hopefully freeing the artist to concentrate on the more creative parts.

AC: Congratulations on finishing your Ph.D., by the way! It seems you applied many different technologies to generating realistic and responsive motion. Let’s start with reinforcement learning (RL). Looking back at your work on Learning to Move Autonomously, how do you think RL helps improve game AI and animation?

LI: Thanks! It was a long haul, but I’m glad I did it.

“The big promise of reinforcement learning is in emergent behaviors.”

In my mind, the big promise of RL is in emergent behaviors. In other words, the designer doesn’t tell the character explicitly what to do (e.g., run to the base and steal the flag), and doesn’t tell the character how to do it. Instead, the designer sets up a reward structure that tells the character what’s good (stealing the flag) and what’s bad (being seen/shot by enemies). Then RL algorithms figure out strategies for the character to steal the flag without being shot, like ducking behind walls and peeking out before running for the next wall. As an added bonus, the designer can easily create characters who seem to exhibit particular traits by simply varying the reward values. For example, if the designer penalizes a character less for being seen by an enemy, that character will seem bolder.
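To make the reward-structure idea concrete, here is a minimal sketch of how such rewards might be encoded for the capture-the-flag example; the event names and numeric values are illustrative assumptions, not taken from Leslie’s system.

```python
# Hypothetical reward structure for a capture-the-flag character.
# The designer specifies what is good or bad; an RL algorithm then
# discovers the strategy (ducking behind walls, etc.) on its own.

REWARDS = {
    "stole_flag": 100.0,     # the objective the designer cares about
    "seen_by_enemy": -10.0,  # being spotted is bad...
    "shot_by_enemy": -50.0,  # ...being shot is worse
    "time_step": -0.1,       # small cost per step so the agent hurries
}

def reward(events):
    """Sum the reward for every event observed this time step."""
    return sum(REWARDS[e] for e in events)

# A "bolder" character: same structure, smaller penalty for being seen.
BOLD_REWARDS = dict(REWARDS, seen_by_enemy=-2.0)
```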

AC: What do you think generally about RL technology today? Did you run into many problems in the process of applying it to actor behaviors and animations?

LI: Yes, I did run into problems. The biggest problem I found is that an RL-based algorithm is hard to debug. There’s a lot to design (state and action spaces, reward structure) and a lot to code. When the character wasn’t acting how I wanted him to, it was really difficult to tell what was going wrong. Was my state space representation bad? Were my reward parameters wrong? Did I need to let the training run longer? Were my training scenarios bad?

Screenshot 2: Reinforcement learning to avoid dynamic obstacles.

AC: Certain game AI middleware vendors increasingly seem to be turning towards RL to help improve the quality of goal-directed behaviors. Do you feel RL has reached a stage where it can be applied to the AI of commercial games?

LI: There is an impressive project from UT Austin called NERO, which is an RL-based video game. I think NERO is evidence that RL might be ready to be applied commercially in some situations. One of the clever aspects of NERO is that it makes the difficulty of training a character to do what you want part of the game itself.

The big challenge I see in making RL commercially viable for other games, where the AI is pre-packaged, is that it can be difficult to check that your character is behaving how you want him to. RL algorithms can design a complex strategy for a character automatically, which is good. But the designer needs to check that in every situation, of which there may be many in a complex game, the character is doing something intelligent (or at least not overtly stupid).

AC: Do you have any advice for developers looking to investigate reinforcement learning?

LI: A really solid understanding of the book ‘Reinforcement Learning: An Introduction’ by Sutton & Barto is a good way to start; an HTML version is freely available online if you don’t want to buy it. The book goes over all of the basics of RL in a way that’s easy to understand (given that it’s a difficult topic). Once you understand this book, understanding more modern RL algorithms is much easier.
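For a flavor of what the book covers, here is a minimal sketch of tabular Q-learning, one of the basic algorithms Sutton & Barto develop. The environment interface (`reset`, `step`, `actions`) is an assumption made for illustration.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning; `env` is assumed to expose reset() -> state,
    step(action) -> (next_state, reward, done), and a list `actions`."""
    Q = defaultdict(float)  # Q[(state, action)] -> estimated return

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy: explore occasionally, otherwise act greedily
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])

            next_state, reward, done = env.step(action)

            # the one-step Q-learning update
            best_next = max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = next_state
    return Q
```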

AC: Moving on to your more recent animation work, specifically your Quick Motion Transitions paper. How do you feel about the technology in retrospect, specifically looking at it from a games perspective? What kind of research would you like to see done in this area?

LI: I feel like the underlying technology is perhaps more general than I gave it credit for in the paper. I gave a presentation on this work at SIGGRAPH 2006 and afterwards, some guys from EA were talking to me about how one might use this technology to marry character simulation and motion capture. Their idea was that one could potentially use the compressed transition table I talked about to quickly find a good motion capture clip to blend onto after simulation. I think this sounds like a cool idea, and I suspect that there may be further clever applications too that would be useful for games.
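As a rough illustration of the EA idea, a transition table can be precomputed offline so that, at runtime, the engine can quickly look up good motion capture frames to blend to. This sketch assumes a simple pose descriptor and Euclidean distance; the paper’s actual compression scheme is more involved.

```python
import numpy as np

def build_transition_table(poses, k=8):
    """For each frame, precompute the k best frames to transition to.

    `poses` is an (n_frames, n_features) array of pose descriptors,
    e.g. joint positions and velocities -- an assumed representation.
    """
    diffs = poses[:, None, :] - poses[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(axis=-1))  # pairwise pose distances
    np.fill_diagonal(dist, np.inf)             # never "transition" in place
    return np.argsort(dist, axis=1)[:, :k]     # keep only k targets per frame

# At runtime (e.g. when a ragdoll simulation settles), find the table row
# closest to the simulated pose and blend to one of its k candidates.
```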

Screenshot 3: Quick motion transitions.

On the research side, I think it would be great to look into better algorithms to score motion automatically. There are lots of ways to synthesize motion, but every method I know of fails in some situations. It would be nice to know when failure happens, because then one could employ mitigating strategies (like synthesizing a different motion).
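One concrete example of automatic scoring: foot-skate, where a planted foot slides along the ground, is a common synthesis artifact and is cheap to measure. The sketch below assumes we already have world-space foot positions and per-frame contact flags.

```python
import numpy as np

def footskate_score(foot_positions, in_contact):
    """Total horizontal sliding of a foot while it should be planted.

    foot_positions: (n_frames, 3) world positions of a foot joint.
    in_contact: (n_frames,) boolean array marking planted frames.
    Lower scores are better; large values flag a failed synthesis.
    """
    deltas = np.diff(foot_positions[:, [0, 2]], axis=0)  # x/z motion per frame
    planted = in_contact[:-1] & in_contact[1:]           # planted across the step
    return float(np.linalg.norm(deltas[planted], axis=1).sum())
```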

AC: I’ve worked on such advanced animation systems before, and sometimes it scares me how much foundation work is required before getting to the cool stuff! If you had to start from scratch with animation technology for a game, how would you approach the problem? Are there any commercial or open-source libraries you would use?

LI: There aren’t any open-source libraries I’m aware of for motion synthesis, but I think it would be great if someone created one! I haven’t had the opportunity to try out any commercial libraries, but NaturalMotion’s Morpheme and Havok’s Behavior engine both look neat. You’re absolutely right that there is a lot of foundation work that needs to be done.

“Motion synthesis is only for those with tough stomachs.”

Motion synthesis is only for those with tough stomachs. The motion synthesis needs of different games really vary, so I’d start by understanding what those needs are. Is responsiveness to player commands really important, or can the player stand a little delay so the motion looks nicer? How much space for motion assets will I have? How important is physically valid motion? These needs will dictate feasible approaches.

If you’re new to the motion synthesis game, I’d then read the SIGGRAPH 2002 papers about motion synthesis. In my mind, that was a banner year for motion synthesis, with several influential, often-cited papers published in both example-driven and physically-based synthesis. Once you understand these papers, you’re well on your way to understanding the papers that came afterwards. There was also a SIGGRAPH course this year (2007) on motion synthesis that may be helpful.

AC: You mentioned working on technology to make it easier to edit large sets of motion. Do you mind telling us more about this top-secret project, and how it could assist game development?

LI: This work is all about how to make an artist’s life easier by trying to automate the repetitive parts of animating a character. Let’s say an animator wants to create a character with a cool strut, but all he has is plain vanilla walking animation clips. He can edit one of his clips and, using our technique, the system will automatically propagate the edits to the other clips. For example, if he edits a straight-ahead walking clip, the edits will get propagated automatically to his turning left and right clips, his walking backwards clip, etc. If he didn’t like the way the edits turned out, he can correct the results, and the system learns from the corrections. We formulate this problem as a regression problem and use a very nice regression technique called Gaussian processes to generate the predicted edits.

Screenshot 4: Generalizing motion editing.

AC: It seems Gaussian Processes have become the cool kid on the block! Could you explain briefly what makes them so useful for your project, and whether they can be applied to other problems in AI or game development?

LI: Let x be an animation clip an artist wants to edit, and y be the edited animation. We want to figure out a function f that relates x to y, such that given a different animation clip x* in the original style, we can get the corresponding edited clip y* = f(x*).

How can we figure out f? There are lots of ways, but what makes Gaussian Processes specifically good for our motion editing problem is that they are non-parametric, so they don’t require as much training data as many parametric models (which means less work for the artist training the system). They can also accommodate context-dependent edits, so the artist can apply one type of edit while the character is standing and another while the character is walking. In addition, they give us a measure of uncertainty for each prediction, which, without getting into too many details, is important for combining the results of multiple predictors.
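For readers curious about the mechanics, here is a minimal sketch of Gaussian process regression with a squared-exponential kernel, returning both the predicted edit and the per-prediction uncertainty Leslie mentions. Treating clips as fixed-length feature vectors is a simplifying assumption for illustration.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel between the rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq / length_scale ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-3):
    """GP regression: predictive mean and variance at X_test.

    Here a row of X_train would be a (hypothetical) feature vector for
    a pose in the original clip, and y_train the artist's edit for it.
    """
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    K_inv = np.linalg.inv(K)
    mean = K_s @ K_inv @ y_train                       # predicted edits y* = f(x*)
    var = rbf_kernel(X_test, X_test).diagonal() \
          - np.einsum("ij,jk,ik->i", K_s, K_inv, K_s)  # predictive uncertainty
    return mean, var
```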

Gaussian Processes can definitely be used for other problems in AI or game development. Many problems can be formulated as regression problems, and GPs have some compelling advantages over other methods. However, since they are relatively memory-intensive and slow (we produced predictions at 2-3 frames per second), I think they are most suitable for data that can be preprocessed.

Screenshot 5: Applying stylized edits to different motions.

AC: Where do you hope to take this project in the near future? What’s in store for you next in terms of research and/or development?

LI: This paper is under review, so hopefully it’ll be accepted, published, and of course, widely adopted! That would be a dream come true.

What’s in store for me next is a start-up. It’s in top-secret mode, so I can’t reveal anything yet, but keep an eye on www.animate-me.com.

AC: Thanks for your time and best of luck with the project(s)!

LI: Thanks again, Alex! This was fun!
