Revolution is a strong word, but let's be clear about this... Everything you know about animation technology is evolving, from the subtle details of movement to radical overhauls of animation pipelines. First, did you notice? And second, are you ready?
Over the past decade, procedural techniques have gone from research prototypes to being applied reliably in mainstream games. While many associate procedural techniques with experimental Perlin-style animation of cartoon characters, developers in industry have been using a wide variety of robust techniques that fall under the "procedural" label: physics simulation, active controllers, inverse kinematics, equations of motion. All of these shift the emphasis from data to code.
Now, as game developers understand the implications of this technology, animation pipelines and runtimes are being redefined accordingly, both to make the most of the low-level hardware and to open up new opportunities in animation expressiveness. This editorial will dig into some of the problems with current systems, and show examples of changes that are happening throughout the industry.
The general trend that's been reinforced over the past few years is that data-heavy animation is not good enough, regardless of how elegant your motion graph or animation state machine is. Over the years, developers have increasingly used blend trees to add more variation, parameterization and style to the resulting animations, but blend trees have introduced as many problems as they have solved on the data front.
a. Motion Capture Is Slow as Hell
At GDC 2011, Naughty Dog presented a technique for optimizing blend trees that was introduced for UNCHARTED 3. It involves clever caching and invalidation of the tree's data, so that the tree takes as little time as possible to traverse while the computation is offloaded.
Animating NPCs in UNCHARTED, by John Bellomy (Download)
The previous game already did all its low-level blending on the PS3's SPUs for performance reasons, but that wasn't enough: a significant amount of time was being spent preparing the data on the main CPU before anything could be offloaded. More time was spent streaming, decompressing and shuffling data around than on the actual blending. (This was discussed further via Twitter [1,2].)
The obvious conclusion is that these performance problems are inherent to data-heavy solutions. Regardless of how well you optimize your blending, the real cost is managing all that data. If Naughty Dog is reaching the limits of the hardware here, what chance does everyone else have? :-)
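To make the caching idea above concrete, here is a minimal, hypothetical sketch of a blend tree that memoises each node's pose per frame and skips near-silent branches. The class names, the toy pose representation (a flat list of joint angles) and the pruning threshold are all invented for illustration; this is not Naughty Dog's API.

```python
# Toy blend tree: each node caches its pose for the current frame and
# skips children whose blend weight is near zero, so shared subtrees are
# evaluated at most once per frame and silent branches not at all.

def blend(poses, weights):
    """Weighted average of poses (flat lists of joint angles)."""
    total = sum(weights)
    return [sum(w * p[i] for p, w in zip(poses, weights)) / total
            for i in range(len(poses[0]))]

class ClipNode:
    """A leaf: a baked animation clip, one pose per frame."""
    def __init__(self, frames):
        self.frames = frames

    def evaluate(self, frame):
        return self.frames[frame % len(self.frames)]

class BlendNode:
    """An interior node blending its children by weight."""
    EPSILON = 1e-4

    def __init__(self, children, weights):
        self.children = children
        self.weights = weights
        self._cache_frame = None
        self._cache_pose = None

    def evaluate(self, frame):
        if self._cache_frame != frame:          # invalidate on a new frame
            live = [(c, w) for c, w in zip(self.children, self.weights)
                    if w > self.EPSILON]        # prune silent branches
            poses = [c.evaluate(frame) for c, _ in live]
            self._cache_pose = blend(poses, [w for _, w in live])
            self._cache_frame = frame
        return self._cache_pose

walk = ClipNode([[0.0], [0.2]])
run  = ClipNode([[1.0], [1.4]])
tree = BlendNode([walk, run], [0.75, 0.25])
print(tree.evaluate(0))   # → [0.25]
```

Real systems cache far more aggressively (and invalidate on weight changes too), but even this toy version shows where the savings come from: zero-weight subtrees are never touched.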
b. Motion Capture Costs a Fortune
Last year, Ivo Herzog and Sven van Soom presented the approach that Crytek took for the parametric animation in CRYSIS 2, and going forward into CRYSIS 3. (See our coverage from the Game/AI Conference 2011.) The bulk of the locomotion system is built on parametric animation with three dimensions: slope angle, turning rate, and movement speed. If you support arbitrarily angled terrain, that turns into a 4D animation!
In total, this results in dozens of animations that need to be motion-captured, and this is just one of many parametric animations required to build a locomotion system. Not to mention it covers just one character style. You'd need to schedule motion-capture sessions for walking (slow to fast), running (jogging to sprint), sneaking or tactical walking, etc.
Parametric Animations & AI in CRYSIS 2, by Sven van Soom (Download PDF)
Obviously, studios the size of Crytek can afford this investment in data & clean-up — but can you?
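To give a feel for what such a parameterization involves at runtime, the snippet below computes bilinear blend weights over a 2D (speed, turn rate) grid of captured clips. The grid values and function names are invented for illustration; this is not Crytek's implementation.

```python
# Bilinear parametric blending: motion-captured clips sit at grid points
# in (speed, turn-rate) space, and the runtime interpolates between the
# four nearest samples.

import bisect

def parametric_weights(xs, ys, x, y):
    """Return [(grid index pair, weight), ...] for bilinear blending."""
    def axis(vals, v):
        v = min(max(v, vals[0]), vals[-1])       # clamp to captured range
        i = max(bisect.bisect_right(vals, v) - 1, 0)
        i = min(i, len(vals) - 2)
        t = (v - vals[i]) / (vals[i + 1] - vals[i])
        return i, t
    ix, tx = axis(xs, x)
    iy, ty = axis(ys, y)
    return [((ix,     iy),     (1 - tx) * (1 - ty)),
            ((ix + 1, iy),     tx       * (1 - ty)),
            ((ix,     iy + 1), (1 - tx) * ty),
            ((ix + 1, iy + 1), tx       * ty)]

# Example: clips captured at 1, 2 and 4 m/s, and at 0 and 90 deg/s turn.
speeds = [1.0, 2.0, 4.0]
turns  = [0.0, 90.0]
w = parametric_weights(speeds, turns, 3.0, 45.0)
# Each of the four surrounding clips gets weight 0.25 here.
```

Every extra dimension (slope angle, terrain angle) multiplies the number of grid corners, which is exactly why the capture list grows so quickly.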
c. Motion Capture Has Slow Turn-Around
Did you know that the big studios serious about character animation have their own motion-capture studios? EA Canada in fact has an entire building dedicated to it, where FIFA and NBA animations are captured. All this helps reduce turn-around: sessions are easier to schedule and the results arrive as soon as possible.
The hassle of using motion capture has certainly dropped over the years, thanks to more affordable hardware that works in office-sized rooms and to better clean-up and post-processing technology. But the overhead of turning a design into polished character animation states (via motion capture) is still best measured in days.
Can you schedule a motion capture session for your game animations on short notice? Do you have animators available to clean-up the resulting data?
The Turning Point
The good news is that advances in technology, specifically procedural animation, are changing all of this. In some studios it's faster than others, but radical changes are happening nonetheless!
d. Procedural Techniques Provide Better Performance
In a presentation at GDC 2011, Tam Armstrong showed off some of the technology behind HALO: REACH. In particular, he showed how inverse kinematics is used for more than constraining the feet during the short ranges of keyframes where they are planted. In the HALO engine, IK now adjusts the foot position relative to the floor every frame, whether the foot is planted or not. (This has visible benefits on slopes.)
The same techniques are used for the upper body. Simple poses provide the context for the animation (where the character is looking or aiming), and IK takes over to adjust the position of the hands to match the current weapon exactly. Animators are given controls to interpolate between forward and inverse kinematics.
The Animation of HALO REACH: Raising the Bar, by Tam Armstrong (Download)
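The animator-facing control for mixing forward and inverse kinematics can be sketched as a simple per-joint interpolation between the authored FK pose and the IK-corrected pose. The 2D positions and names below are purely illustrative.

```python
# FK/IK blend control: the final joint position interpolates between the
# authored forward-kinematics pose and the IK-corrected pose.

def lerp(a, b, t):
    """Component-wise linear interpolation between two positions."""
    return [(1 - t) * ai + t * bi for ai, bi in zip(a, b)]

def resolve_joint(fk_pos, ik_pos, ik_weight):
    """ik_weight is keyed by the animator: 0 = pure FK, 1 = pure IK."""
    return lerp(fk_pos, ik_pos, ik_weight)

# A hand authored near the weapon grip (FK) is nudged onto the exact
# grip point (IK) as the animator ramps the weight toward 1.
hand_fk = [0.40, 1.10]
grip_ik = [0.45, 1.05]
print(resolve_joint(hand_fk, grip_ik, 0.5))   # halfway between the two
```

In practice this weight is animated over time (e.g. ramping to full IK as the hand reaches the weapon), which is what makes the control useful to animators.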
During the questions and in conversation afterwards, the idea of having IK always on was elaborated further. If you assume the feet and hands will be automatically placed every frame anyway, then you can significantly reduce the amount of motion-capture data you need at runtime. The IK will be able to reproduce it on demand, effectively functioning as a domain-specific compression scheme.
Using IK in this way opens the door to better performance on platforms such as the PS3 (and others), since less data needs to be shepherded from disk and decompressed before hitting dedicated processors like the SPUs (or, arguably, a GPU). Since proportionally more time is spent on computation rather than data management, the work is easier to parallelize.
e. Procedural Techniques Have Low Overheads
By putting more emphasis on animation code rather than animation data, it becomes significantly easier to make changes. Over the years, customizable blend trees and animation graphs loaded from files have allowed technically-oriented animators to prototype ideas quickly. Procedural techniques, similarly, give even more power to animation-oriented programmers and help them get impressive results with lower overheads.
One specific example of this is OVERGROWTH (see our video interview on the topic). David Rosen is the one-man development team, and is responsible for both creating animations and setting up the underlying code. By using more code (e.g. IK-based locomotion, physics-based combat animations) than traditional AAA games, he manages to get incredible results with a fraction of the budget.
f. Procedural Animation Can Look Amazing
The primary argument made against procedural techniques over the years has been about the quality of the movement: they have managed to produce stylized results, but not necessarily believable animation that can compete with motion capture. Thanks to the variety of algorithms and techniques now used in animation systems, that too is changing.
One type of procedural animation that's becoming increasingly popular is physics-driven animation, which puts the emphasis on simulation to generate character movement (or augment it). For instance, MAX PAYNE 3 uses a mix of NaturalMotion's physics-driven animation technology with Rockstar's internal R.A.G.E. engine, which uses a proprietary physics implementation derived from Bullet Physics.
I worked on early prototypes of MAX PAYNE 3's animation for a few years, and of course there's a large quantity of data under the hood too, in the shape of parametric animations. For instance, when lying on the floor you can control the character to aim in any direction; it's a form of IK implemented by blending different poses together. Still, the game pushes the balance further toward procedural than other games today.
Modern Procedural Techniques
If we had to redesign animation systems knowing what we know now, what would they look like? Radically different from our current solutions, which rely on a data-heavy workflow.
The term "runtime rig" has been floating around in technical animation circles for a while now, but it was a masterclass with Laurent Ancessi that introduced the concept here at AiGameDev.com. He uses the term to describe an implementation of the animation skeleton that includes additional runtime features to improve control.
In editing tools like Maya or Blender, most game studios use a character "rig" to help animators apply postures to the skeleton while creating animation clips. The runtime rig is an implementation of this within the engine. As a more formal definition, we'll use the following:
“A runtime rig is a layer above the traditional skeleton bone hierarchy that provides additional effectors, implemented dynamically in-engine with inverse kinematics, to give more convenient, higher-level control that complements forward kinematics.”
Of course, game developers have been using inverse kinematics for decades now. What's different is that we can now rely on it consistently throughout the runtime pipeline, not only when a foot-plant annotation is set or a weapon is being held. With the runtime rig always on, we can, for example, animate only the end-effectors of an IK chain (e.g. feet, hands) and rely on runtime computation to figure out the intermediate joints (e.g. knees, elbows).
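For illustration, here is a minimal 2D version of that idea: only the root (hip) and the end-effector (foot) are specified, and the intermediate joint (knee) is recovered at runtime with a standard two-bone analytic solve using the law of cosines. This is a generic textbook construction, not any particular engine's implementation.

```python
# Two-bone analytic IK in 2D: recover the knee position given the hip,
# the foot, and the two bone lengths.

import math

def two_bone_ik(hip, foot, l1, l2, bend=1.0):
    """Return the knee position; bend = -1.0 flips the solution side."""
    dx, dy = foot[0] - hip[0], foot[1] - hip[1]
    d = math.hypot(dx, dy) or 1e-9
    ux, uy = dx / d, dy / d                 # unit hip-to-foot direction
    # Clamp the reach so a solution always exists.
    d = min(max(d, abs(l1 - l2) + 1e-6), l1 + l2 - 1e-6)
    # Distance from the hip to the knee's projection on the hip-foot line,
    # then the knee's height above that line (law of cosines).
    a = (l1 * l1 - l2 * l2 + d * d) / (2.0 * d)
    h = math.sqrt(max(l1 * l1 - a * a, 0.0))
    return (hip[0] + a * ux - bend * h * uy,
            hip[1] + a * uy + bend * h * ux)

knee = two_bone_ik((0.0, 0.0), (1.4, 0.0), 1.0, 1.0)
# The knee lies exactly one bone length from both the hip and the foot.
```

Because the knee is recomputed every frame, only the hip and foot trajectories need to be stored, which is the data-compression angle mentioned earlier.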
A large proportion of games use ragdolls these days, driven by a traditional physics simulation under the hood that combines dynamics with collision detection. Some games also use rather more active ragdolls, applying dynamically calculated forces to drive the skeleton toward specific poses. For instance, a falling character might be driven towards a flailing animation, and a character bouncing against a wall towards a fetal position. Some games also run physics simulations on "accessory" bones such as clothing items, or even weapons like a ball and chain.
What's relatively new is the idea of an "active skeleton" that's always on and applied to the whole body. Why limit physics simulation to hit reactions and cosmetic bones when it has the potential to help with overall character responsiveness? Let's look at two examples.
OVERGROWTH uses such an active skeleton during combat to make sure all hits and responses behave in a physically plausible fashion. The advantage of this approach for Wolfire is that significantly fewer keyframes are required to create combat hits and responses.
MAX PAYNE 3's dive animations are also actively simulated, and you'll notice how his posture changes as he collides with obstacles in mid-air. In this case, the advantage of the active skeletons is that you only need a diving pose (single frame) which the physics simulation will augment.
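The "drive toward a pose" idea behind both examples can be sketched with a PD (proportional-derivative) controller: the simulation applies a torque proportional to the error from the target pose, damped by the joint's velocity. The gains, time step and single unit-inertia joint below are invented for illustration.

```python
# Active-skeleton sketch: a PD controller drives one simulated joint
# toward a target angle, integrated with semi-implicit Euler.

KP, KD, DT = 40.0, 6.0, 1.0 / 60.0   # illustrative gains and time step

def pd_torque(angle, velocity, target_angle):
    """Torque pulling toward the target, damped by current velocity."""
    return KP * (target_angle - angle) - KD * velocity

def simulate(angle, velocity, target, steps):
    for _ in range(steps):
        torque = pd_torque(angle, velocity, target)
        velocity += torque * DT       # unit inertia, no gravity
        angle += velocity * DT
    return angle

# Starting at rest, the joint settles onto the target pose: the
# animation supplies the target, the physics supplies the motion.
final = simulate(0.0, 0.0, 1.2, 600)
```

Collisions perturb the simulated state, and the controller pulls the skeleton back toward the authored pose afterwards, which is exactly the behavior described for the dive animations above.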
There are downsides to this approach, including the performance of the simulation, the stability of the resulting animation, the believability of the motion, and the technical challenges of setting it up. However, as more developers adopt the approach, we're making fast progress on all of these fronts.
If you've read this far you'll probably want to attend our intense one-day workshop on character animation technology, with a focus on the procedural techniques mentioned in this editorial. The workshop will bring together the world's best developers in the field — including Ubisoft Montreal (FAR CRY 3, WATCHDOGS), Arkane Studios (DISHONORED), IO Interactive (HITMAN: ABSOLUTION), Microsoft Game Studios, Avalanche (JUST CAUSE), Ubisoft Paris (GHOST RECON: FUTURE SOLDIER), Yager (SPEC OPS: THE LINE), etc.
There are many opportunities that will open up as we start relying on Runtime Rigs and Active Skeletons more often. Of course, this will lead to new challenges as well! To best manage a Runtime Rig, engines will need an additional layer of AI decision making to decide what posture and gestures can be sensibly combined — e.g. wielding weapons. Then, to make the most of an Active Skeleton, engines will also require higher-level situation awareness to be able to drive towards the most appropriate poses in a context sensitive way...
It's only a matter of time before all these ideas make their way into commercial games. Some already have! But for the rest, it's an exciting time to be in animation programming...
» Click here to attend our one-day procedural animation workshop! There are fewer than 9 spots left!