This paper, from the University of British Columbia, introduces techniques for physically-based biped locomotion control. It makes two contributions:
It provides a simple way to author biped controllers from a small number of parameters or from motion-capture data.
Feedback-error learning is used to learn predictive torque models for the joints, which allows low-gain control and produces smoother simulated motion.
The result is a biped that remains stable when pushed and can cope with unexpected terrain. A large variety of controllers can be created without expensive motion-capture data, at the cost of some realism.
View or download the movie (MOV, 24 Mb).
Here’s the abstract:
Physics-based simulation and control of biped locomotion is difficult because bipeds are unstable, under-actuated, high-dimensional dynamical systems. We develop a simple control strategy that can be used to generate a large variety of gaits and styles in real-time, including walking in all directions (forwards, backwards, sideways, turning), running, skipping, and hopping. Controllers can be developed using motion capture data or can be authored using a small number of parameters.
The controllers are applied to 2D and 3D physically-simulated character models. Their robustness is demonstrated with respect to pushes in all directions, unexpected steps and slopes, and unexpected variations in kinematic and dynamic parameters. Direct transitions between controllers are demonstrated as well as parameterized control of changes in direction and speed. Feedback-error learning is applied to learn predictive torque models, which allows for the low-gain control that typifies many natural motions as well as producing smoother simulated motion.
SIMBICON: Simple Biped Locomotion Control. KangKang Yin, Kevin Loken, and Michiel van de Panne. ACM Transactions on Graphics 26(3).
Here’s a quick assessment of the technology based on how easy it would be to use for upcoming games.
- Applicability to games: 5/10
- Having a physical controller is useful for generating animations without motion capture. However, games have come to expect rather more realism than this version of the technology can provide. Like NaturalMotion's technology, it's probably best used for short stretches before blending back to real mo-cap clips.
- Usefulness for character AI: 4/10
- The controller is useful for dealing with pushes or balancing, but these are not major problems for game AI these days (simple animations handle them remarkably well). Additionally, the walk controller isn't fully stable on stairs; see the Java applet.
- Simplicity to implement: 8/10
- On the upside, this is probably the easiest such technique to implement. It requires a state machine with certain key poses, plus simple proportional-derivative (PD) controllers. The feedback-error learning isn't required, but seems a great idea for tracking or retargeting mo-cap animations.
Biped motion from state machines.
Do you expect such technology to find a use in game AI?