
How to Prevent Behavior Aliasing

Alex J. Champandard on July 20, 2007

No matter how amazing you make the AI for your actors, the resulting behaviors are fundamentally limited by the means those actors have to express themselves.

I call this behavior aliasing, inspired by the problems you get when you down-sample a high-resolution representation (like rendering a line on a pixel display).

Aliasing of behaviors is a particular problem in games. The best solution, obviously, is simply to make the AI only as complex as the representation can express! But there are certainly other “anti-aliasing” tricks, and to apply them it helps to know the two main sources of the problem.

Canned Assets

Fundamentally, resources in game development are limited. So you often end up with only 14 different animations and 23 sounds from which to build all the actor behaviors!

Things are a bit better for flagship AAA titles where publishers throw money at the problem… But sadly, this kind of asset aliasing won’t disappear. There must be “resources” to express the behavior at a certain level; it’s the very nature of the problem.

However, there are some workarounds:

  • Break down assets into smaller parts and recombine them together.
  • Generate assets automatically using procedural technology.

Both workarounds reduce aliasing, but you’ll need more technology to combine the assets together and to create them automatically. On top of that, the result is most often less realistic, which is a difficult trade-off to make.
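
As a rough illustration, here’s a minimal C++ sketch of those two ideas; all the names (Fragment, compose, proceduralRate, the clip list) are hypothetical, and a real engine would blend between fragments rather than just play them back-to-back:

    #include <cstdlib>
    #include <initializer_list>
    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical asset fragment: a short, reusable piece of motion.
    struct Fragment {
        std::string name;
        float duration;  // seconds
    };

    // Workaround 1: recombine small fragments into a composite behavior,
    // rather than authoring one monolithic animation per behavior.
    std::vector<Fragment> compose(std::initializer_list<Fragment> parts) {
        return std::vector<Fragment>(parts);
    }

    // Workaround 2: vary assets procedurally; here we just perturb the
    // playback rate so repeated behaviors don't look identical.
    float proceduralRate(float base) {
        return base * (0.9f + 0.2f * (std::rand() / (float)RAND_MAX));
    }

    int main() {
        auto openDoor = compose({{"reach", 0.4f}, {"grasp", 0.2f}, {"push", 0.6f}});
        for (const Fragment& f : openDoor)
            std::cout << f.name << " (" << f.duration << "s) at rate "
                      << proceduralRate(1.0f) << "\n";
    }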

Architecture

Aliasing also happens because of the way the game engine is structured. Since this can be fixed entirely with good software design, it’s much less forgivable. Specifically, I’m thinking about the animation interface, which typically works like this:

  1. A programmer builds the animation state machine, considering only the capabilities of the player avatar.

  2. The AI is given another “player” to control through the same high-level virtual animation interface.

It’s a shame, since AI animations could easily cover a much wider scope than what the player can trigger with controller buttons and a directional pad. This engine design is a good way to divide labor, and it’s modular, but lots of information available to the AI is simply never used.

The AI typically stores a very good representation of what it’s trying to do: go to this location, perform this task. But instead of expressing those intentions directly, it must convert them into a lower-bandwidth format and feed them through a virtual controller to the animation system.
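
To make the bottleneck concrete, here’s a hedged sketch (hypothetical types, not any particular engine’s API) of how a rich movement intent gets squeezed into controller-shaped input:

    #include <cmath>

    // Hypothetical rich AI intent: where to go, how, and why.
    struct MoveIntent {
        float targetX, targetZ;  // exact destination
        float urgency;           // 0 = stroll, 1 = sprint
        bool  staySilent;        // tactical context a pad can't express
    };

    // The virtual controller the animation system actually accepts.
    struct PadState {
        float stickX, stickY;    // direction only, normalized
        bool  runButton;
    };

    // Down-conversion: most of the intent is thrown away here.
    PadState toController(const MoveIntent& intent, float selfX, float selfZ) {
        float dx = intent.targetX - selfX;
        float dz = intent.targetZ - selfZ;
        float len = std::sqrt(dx * dx + dz * dz);
        PadState pad{};
        if (len > 0.001f) {
            pad.stickX = dx / len;
            pad.stickY = dz / len;
        }
        pad.runButton = intent.urgency > 0.5f;
        // The exact destination and intent.staySilent are lost:
        // the animation layer only ever sees a direction and a button.
        return pad;
    }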

The solution in this case is to use a unified control system for animation and AI — at least for the non-player characters. If your AI engine is any good, it should handle this without any trouble!
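
A unified interface might instead accept the AI’s own representation directly; again a sketch under assumed names rather than a definitive design:

    #include <cstdio>

    // Hypothetical unified interface: the NPC animation system consumes
    // the AI's high-level request directly, with no pad in between.
    struct AnimationRequest {
        float targetX, targetZ;
        float urgency;
        bool  staySilent;        // e.g. choose a crouched walk cycle
    };

    class NpcAnimator {
    public:
        void perform(const AnimationRequest& req) {
            // The full intent is available: pick gait and posture from
            // the same data the AI planned with.
            const char* gait = req.urgency > 0.5f ? "run"
                             : req.staySilent     ? "sneak"
                                                  : "walk";
            std::printf("%s toward (%.1f, %.1f)\n",
                        gait, req.targetX, req.targetZ);
        }
    };

The design choice is simply to let NPC animation consume the same data structures the AI plans with, and reserve the pad-shaped interface for the player.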

Have you noticed behavior aliasing? How would you fix it?

Discussion (3 Comments)

Sergio on July 24th, 2007

The bottleneck of the presentation layer is, to me, the biggest problem we face when trying to create realistic AI. If we could solve this satisfactorily, then we wouldn't need cutscenes in games to make them feel like movies. It's impossible, even with good resources, to anticipate all possible animations and dialog that agents will need during the game. And, sadly, that's all that players will experience. No matter how advanced our decision-making logic is, players will only see the animation that the agent chose at the end.

So I wonder if it makes sense to write our AI the other way around. Instead of creating sophisticated models of the internal state of the agent, and then "dumbing" them down to what's available for presenting that to the player, how about starting from the presentation and working up? You begin with a fixed set of animations and try to devise situations where those animations are relevant. Then, you write the code to support them. It sounds like a small set of animations could be better controlled by a simple list of predefined behaviors than by a hard-to-tune emergent system. I think this is the number one reason game AI developers are so fond of good old FSMs.

At least this should work for current games. As technologies mature, we can expect more content to get created, and more options to become available. It's still a long shot from purely procedural content... for humans. But maybe opportunities will arise in other types of games. Maybe we could create a game where the characters are robots, with synthesized voices and procedural (R2D2-like) animation. Spore could be the perfect example of how this should work, but it's also the perfect example of how the technology is still far from mature.

gware on July 25th, 2007

Joel, I think you have a very good point here: animations are only one part of the presentation layer Sergio was talking about. IMO, sound engines are easily underestimated: sound is a very good medium for immersion. I believe games are like movies: they need good soundtracks. But even when considering this, we still end up being very limited by the assets. We still see programmers building FSMs for animations and sound effects. I would like to see AI driving the animation and sound engines: mining databases for good animations and sound effects, and planning what to do given the current world representation and goals. Using this, you can try to remove low-level inputs, like FSMs, and let a planner handle asset management given high-level goals.

alexjc on July 25th, 2007

Sergio, your comments on bottom-up design inspired me to write yesterday's post about the two different methodologies (http://aigamedev.com/methodology/top-down-vs-bottom-up-design), the other being top-down. I think it's certainly very important to take assets and the "presentation layer" into account when designing. But having a mental model is very necessary, as it fits into the overall game design, provides a nice logical way to structure the behaviors, and gives you guidance for adding new assets. So even if you design bottom-up, you can't ignore the top-down approach either.

Joel, that's a great example. Thief was certainly groundbreaking in that respect. It's a good way to make sure the mental model is actually useful for more than just structuring behaviors! But somehow, I was always very frustrated when playing the game. First, it's annoying to have such obvious characters (like the loud guy in action movies who gets killed first), and secondly, it takes away any kind of doubt. That approach does not tease the player's mind very much. It's great to see games using these ideas and exploiting them in better ways, like having a guy in radio contact saying he heard something. It's much more subtle and not as annoying!

Gabriel, you're absolutely right about mining large databases for assets. That's something developers are starting to do already. Most of the AAA publishers can afford huge motion-capture sessions, which result in incredible amounts of data. There are some great techniques to extract interesting motions and blend them together (e.g. motion graphs, motion families, etc.). I worked on shared animation technology for Rockstar's internal middleware for a while, and we were looking into some very interesting technology. I'll write more about it in a few weeks... (Not sure about applying that to audio though, I don't have much experience there.)

Alex
