The SIGGRAPH conference is always a treasure chest for anyone working in fields related to graphics or simulation. This year’s event is still a few months away, but the main papers are already online. Here’s a selection of the papers most relevant to creating artificially intelligent characters in games.
Some of these research projects take a little effort to see their potential, but others are little short of revolutionary!
Group Motion Editing
Starting off with a bang, here’s a paper about adapting motion-captured crowd animations to fit into arbitrary environments. While not all studios have the resources to capture animations for more than two people, this kind of technology could also help with group formations and squads.
Here’s the abstract:
“Animating a crowd of characters is an important problem in computer graphics. The latest techniques enable highly realistic group motions to be produced in feature animation films and video games. However, interactive methods have not emerged yet for editing the existing group motion of multiple characters. We present an approach to editing group motion as a whole while maintaining its neighborhood formation and individual moving trajectories in the original animation as much as possible. The user can deform a group motion by pinning or dragging individuals. Multiple group motions can be stitched or merged to form a longer or larger group motion while avoiding collisions. These editing operations rely on a novel graph structure, in which vertices represent positions of individuals at specific frames and edges encode neighborhood formations and moving trajectories. We employ a shape-manipulation technique to minimize the distortion of relative arrangements among adjacent vertices while editing the graph structure. The usefulness and flexibility of our approach is demonstrated through examples in which the user creates and edits complex crowd animations interactively using a collection of group motion clips.”
The official home page also contains the video, and the paper is below:
Group Motion Editing T. Kwon, K.H. Lee, J. Lee, S. Takahashi Proceedings of ACM SIGGRAPH '08 Download PDF
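To make the abstract’s graph structure more concrete, here’s a minimal sketch (not the paper’s code, and the `neighbor_radius` criterion is my own simplification) of a graph whose vertices are per-individual, per-frame positions and whose edges encode moving trajectories and neighborhood formations:

```python
import math

def build_group_motion_graph(trajectories, neighbor_radius=2.0):
    """trajectories: one list of (x, y) positions per individual."""
    vertices = []           # flat list of positions
    index = {}              # (individual, frame) -> vertex id
    for i, path in enumerate(trajectories):
        for f, pos in enumerate(path):
            index[(i, f)] = len(vertices)
            vertices.append(pos)

    trajectory_edges = []   # same individual, consecutive frames
    formation_edges = []    # different individuals, same frame, close together
    for (i, f), v in index.items():
        if (i, f + 1) in index:
            trajectory_edges.append((v, index[(i, f + 1)]))
        for (j, g), w in index.items():
            if g == f and j > i:
                ax, ay = vertices[v]
                bx, by = vertices[w]
                if math.hypot(ax - bx, ay - by) <= neighbor_radius:
                    formation_edges.append((v, w))
    return vertices, trajectory_edges, formation_edges

# Two individuals walking side by side for three frames:
paths = [[(0, 0), (1, 0), (2, 0)],
         [(0, 1), (1, 1), (2, 1)]]
verts, traj, form = build_group_motion_graph(paths)
```

The paper’s editing operations then become a shape-manipulation problem over this graph: pin or drag some vertices and minimize distortion of the relative arrangement of the rest.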
Clone Attack! Perception of Crowd Variety
This paper gets away with drawing conclusions about crowd diversity that game developers have known for years. It does, however, back those claims up with hard perceptual evidence. Here’s the abstract:
“When simulating large crowds, it is inevitable that the models and motions of many virtual characters will be cloned. However, the perceptual impact of this trade-off has never been studied. In this paper, we consider the ways in which an impression of variety can be created - and the perceptual consequences of certain design choices. In a series of experiments designed to test people’s perception of variety in crowds, we found that clones of appearance are far easier to detect than motion clones. Furthermore, we established that cloned models can be masked by color variation, random orientation, and motion. Conversely, the perception of cloned motions remain unaffected by the model on which they are displayed. Other factors that influence the ability to detect clones were examined, such as proximity, model type and characteristic motion. Our results provide novel insights and useful thresholds that will assist in creating more realistic, heterogeneous crowds.”
The home page also hosts a video on the topic, and you can read the paper itself here:
Clone Attack! Perception of Crowd Variety Rachel McDonnell, Micheal Larkin, Simon Dobbyn, Steven Collins and Carol O'Sullivan Proceedings of ACM SIGGRAPH '08 Download PDF
Simulation of Stylized Locomotion
Despite its questionable approach for developers, NaturalMotion’s euphoria has struck a chord with producers and gamers alike, probably thanks to its ability to leverage physical simulation. Now other research projects are showing more sensible ways of approaching the problem, embracing proven techniques like keyframes and motion capture instead. Here’s the abstract:
“Animating natural human motion in dynamic environments is difficult because of complex geometric and physical interactions. Simulation provides an automatic solution to parts of this problem, but it needs control systems to produce lifelike motions. This paper describes the systematic computation of controllers that can reproduce a range of locomotion styles in interactive simulations. Given a reference motion that describes the desired style, a derived control system can reproduce that style in simulation and in new environments. Because it produces high-quality motions that are both geometrically and physically consistent with simulated surroundings, interactive animation systems could begin to use this approach along with more established kinematic methods.”
For anyone capable of understanding the maths behind this paper, the implementation would be worth every second spent on it!
Interactive Simulation of Stylized Human Locomotion Marco da Silva, Yeuhi Abe, Jovan Popović ACM Transactions on Graphics 2008 Download PDF
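For a feel of the lowest-level building block such simulated-locomotion controllers rest on, here’s a hedged sketch of a joint-space PD servo tracking a reference motion. This is a generic technique, not the paper’s control system, and the gains, inertia, and single-joint “physics” are invented for illustration:

```python
def pd_torque(theta, omega, theta_ref, kp=300.0, kd=30.0):
    """Proportional-derivative torque driving a joint toward a reference angle."""
    return kp * (theta_ref - theta) - kd * omega

def simulate_joint(theta_ref_track, theta=0.0, omega=0.0,
                   inertia=1.0, dt=0.001):
    """Integrate one joint under PD control toward a reference track."""
    for theta_ref in theta_ref_track:
        torque = pd_torque(theta, omega, theta_ref)
        omega += (torque / inertia) * dt   # semi-implicit Euler step
        theta += omega * dt
    return theta

# Track a constant reference of 0.5 rad for one simulated second:
final = simulate_joint([0.5] * 1000)
```

The hard part the paper addresses is everything above this layer: computing the reference poses and feedback so the whole character stays balanced and stylistically faithful, not just each joint individually.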
Spore’s Motion Retargeting
It’s refreshing to see white papers come out of extensive, well-funded R&D at some of the biggest game development studios. This paper is a tour de force through the technology inside Spore’s animation system. Here’s the abstract:
“Character animation in video games — whether manually key-framed or motion captured—has traditionally relied on codifying skeletons early in a game’s development, and creating animations rigidly tied to these fixed skeleton morphologies. This paper introduces a novel system for animating characters whose morphologies are unknown at the time the animation is created. Our authoring tool allows animators to describe motion using familiar posing and key-framing methods. The system records the data in a morphology-independent form, preserving both the animation’s structural relationships and its stylistic information. At runtime, the generalized data are applied to specific characters to yield pose goals that are supplied to a robust and efficient inverse kinematics solver. This system allows us to animate characters with highly varying skeleton morphologies that did not exist when the animation was authored, and, indeed, may be radically different than anything the original animator envisioned.”
Ever since it was first announced and demonstrated, game developers around the world have been wondering if Spore’s magical animation system could be used to help animate characters in other games more easily. This paper answers that question: probably not! (See the paper’s home page for the FAQ.)
Real-time Motion Retargeting to Highly Varied User-Created Morphologies Chris Hecker, B. Raabe, R.W. Enslow, J. DeWeese, J. Maynard and K. van Prooijen Proceedings of ACM SIGGRAPH '08 Download PDF
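At runtime, the system turns morphology-independent animation data into pose goals for an IK solver. Spore’s solver handles arbitrary limb counts and proportions; as a much simpler illustration of that last step only, here’s standard analytic two-bone IK in 2D (a textbook technique, not the paper’s algorithm):

```python
import math

def two_bone_ik(l1, l2, tx, ty):
    """Return (shoulder, elbow) angles placing a two-bone chain's end
    effector at (tx, ty), clamping unreachable targets."""
    d = math.hypot(tx, ty)
    d = max(abs(l1 - l2), min(l1 + l2, d))   # clamp to the reachable annulus
    # Law of cosines gives the interior angle at the elbow; the bend is its supplement.
    cos_int = (l1**2 + l2**2 - d**2) / (2 * l1 * l2)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_int)))
    # Shoulder: aim at the target, offset by the triangle's base angle.
    cos_off = (l1**2 + d**2 - l2**2) / (2 * l1 * d)
    shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_off)))
    return shoulder, elbow

# Unit-length upper and lower arm reaching for the point (1, 1):
shoulder, elbow = two_bone_ik(1.0, 1.0, 1.0, 1.0)
```

The interesting engineering in the paper is upstream of this: generalizing the authored pose goals so they remain meaningful on skeletons the animator never saw.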
Adapting Simulated Skills
This is a paper that focuses on adapting existing motion controllers to perform physically accurate tasks. It sounds nice in theory, and there are some interesting techniques under the hood, but the demo application is simply nowhere near good enough for games yet.
“Modeling the large space of possible human motions requires scalable techniques. Generalizing from example motions or example controllers is one way to provide the required scalability. We present techniques for generalizing a controller for physics-based walking to significantly different tasks, such as climbing a large step up, or pushing a heavy object. Continuation methods solve such problems using a progressive sequence of problems that trace a path from an existing solved problem to the final desired-but-unsolved problem. Each step in the continuation sequence makes progress towards the target problem while further adapting the solution. We describe and evaluate a number of choices in applying continuation methods to adapting walking gaits for tasks involving interaction with the environment. The methods have been successfully applied to automatically adapt a regular cyclic walk to climbing a 65cm step, stepping over a 55cm sill, pushing heavy furniture, walking up steep inclines, and walking on ice. The continuation path further provides parameterized solutions to these problems.”
That’s the abstract of the paper you can download below:
Continuation Methods for Adapting Simulated Skills K. Yin, S. Coros, P. Beaudoin, and M. van de Panne. Proceedings of ACM SIGGRAPH '08 Download PDF
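The continuation idea itself is easy to show in miniature: rather than solving the hard target problem directly, solve a sequence of interpolated problems, warm-starting each from the previous solution. The toy equation below is invented for illustration and merely stands in for “find controller parameters x that achieve task difficulty t”; it is nothing like the paper’s actual walking problems:

```python
def newton(f, df, x0, iters=50, tol=1e-10):
    """Basic Newton's method root finder."""
    x = x0
    for _ in range(iters):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def solve_by_continuation(target=1.0, steps=10):
    """Ramp the task parameter t from 0 to the target, warm-starting
    each solve from the previous solution along the path."""
    x = 0.0                               # solution of the easy t = 0 problem
    path = []
    for k in range(1, steps + 1):
        t = target * k / steps
        f = lambda x, t=t: x**3 + x - t   # toy "task residual" at difficulty t
        df = lambda x: 3 * x**2 + 1
        x = newton(f, df, x)              # warm start from previous solution
        path.append((t, x))
    return path

path = solve_by_continuation()
t_final, x_final = path[-1]
```

As the abstract notes, a nice side effect is that the intermediate solutions along the path are themselves usable, giving a parameterized family of solutions rather than a single one.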
Parallel GPU Programming
This last paper isn’t about game AI specifically, but it’s broadly applicable to the kinds of problems game developers face. In fact, some forum members here at AiGameDev.com are looking at ways of leveraging GPUs for AI calculations, and this kind of technology could really help:
“We present BSGP, a new programming language for general purpose computation on the GPU. A BSGP program looks much the same as a sequential C program. Programmers only need to supply a bare minimum of extra information to describe parallel processing on GPUs. As a result, BSGP programs are easy to read, write, and maintain. Moreover, the ease of programming does not come at the cost of performance. A well-designed BSGP compiler converts BSGP programs to kernels and combines them using optimally allocated temporary streams. In our benchmark, BSGP programs achieve similar or better performance than well-optimized CUDA programs, while the source code complexity and programming time are significantly reduced. To test BSGP’s code efficiency and ease of programming, we implemented a variety of GPU applications, including a highly sophisticated X3D parser that would be extremely difficult to develop with existing GPU programming languages.”
For more details, download the paper below:
BSGP: Bulk-Synchronous GPU Programming Qiming Hou, Kun Zhou and Baining Guo Proceedings of ACM SIGGRAPH '08 Download PDF
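To give a feel for the bulk-synchronous model the language is named after (this is the general BSP idea, not BSGP syntax): a computation is a sequence of supersteps, each running one logical thread per data element and ending at a barrier that makes all writes visible. The sketch below emulates the threads sequentially in Python, using a Hillis–Steele inclusive prefix sum, a staple GPU kernel; on a GPU each loop body would be one thread.

```python
def prefix_sum_supersteps(data):
    """Inclusive prefix sum as a sequence of bulk-synchronous supersteps."""
    a = list(data)
    n = len(a)
    stride = 1
    while stride < n:
        # Superstep: every "thread" i reads its neighbor at i - stride
        # from the previous step's state...
        nxt = [a[i] + a[i - stride] if i >= stride else a[i]
               for i in range(n)]
        # ...and the barrier makes all writes visible before the next step.
        a = nxt
        stride *= 2
    return a

result = prefix_sum_supersteps([1, 2, 3, 4, 5])
```

BSGP’s contribution is letting you write such multi-kernel computations as one sequential-looking program, with the compiler splitting it into kernels at the barriers and managing the temporary streams between them.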
Stay tuned for more in-depth analysis of these papers over the next few months as SIGGRAPH approaches.