No matter how amazing you make the AI for your actors, the resulting behaviors are fundamentally limited by what means they have to express themselves.
I call this behavior aliasing, inspired by the problems you get when you try to down-sample a high-resolution representation (like rendering a line on a pixel display).
Aliasing of behaviors is a particular problem in games. The best solution, obviously, is simply to make the AI only as complex as the representation can express! But there are certainly other “anti-aliasing” tricks, and to apply them, it helps to know the two main sources of the problem.
Fundamentally in game development, resources are limited. So often you end up with only 14 different animations and 23 sounds to build all the actor behaviors!
Things are a bit better for flagship AAA titles where publishers throw money at the problem… But sadly, this kind of asset aliasing won’t disappear. There must be “resources” to express the behavior at a certain level; it’s the very nature of the problem.
However, there are some workarounds:
- Break down assets into smaller parts and recombine them together.
- Generate assets automatically using procedural technology.
This fixes aliasing problems, but you’ll need more technology to combine the assets and to generate them automatically. On top of that, the result is most often less realistic, which makes for a difficult trade-off.
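The payoff of the recombination trick is combinatorial: a handful of partial assets per channel multiplies into many full-body behaviors. Here is a minimal sketch of that idea, assuming a hypothetical asset library split into per-channel clips (the clip names and channels are made up for illustration):

```python
from itertools import product

# Hypothetical mini asset library: a few partial clips per channel.
legs  = ["walk", "run", "strafe"]
torso = ["aim", "reload", "wave", "carry"]
voice = ["grunt", "callout"]

def composite_behaviors(*channels):
    """Recombine small per-channel assets into full-body behaviors."""
    return [" + ".join(parts) for parts in product(*channels)]

behaviors = composite_behaviors(legs, torso, voice)
print(len(legs) + len(torso) + len(voice))  # 9 source assets...
print(len(behaviors))                        # ...yield 24 combined behaviors
```

Only 9 authored assets yield 24 distinct behaviors here, and the gap widens fast as channels grow; the cost, as noted above, is the blending technology needed to make the combinations look plausible.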
Aliasing also happens because of the way the game engine is structured. Since this can be fixed entirely with good software design, it’s much less forgivable. Specifically, I’m thinking about the animation interface, which typically works like this:
- A programmer builds the animation state machine, considering only the capabilities of the player avatar.
- The AI is given another “player” to control, using the same high-level virtual animation interface.
It’s a shame, since AI animations could easily cover a much wider scope than what the player can specify with controller buttons and a directional pad. This engine design is a good way to divide labor, and it’s modular, but lots of information available in the AI is simply never used.
The AI typically stores a very good representation of what it’s trying to do: go to this location, perform this task. But instead of being expressed directly, those intentions must be converted into a lower-bandwidth format and fed through a virtual controller to the animation system.
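You can see the information loss in a small sketch of that down-conversion. Assume a hypothetical virtual pad with 8 directions and one context button (the function and task names are invented for illustration); two quite different AI intents collapse into the exact same controller state:

```python
import math

def intent_to_controller(goal_dx, goal_dy, task):
    """Down-convert a rich AI intent into virtual-pad inputs (lossy)."""
    angle = math.atan2(goal_dy, goal_dx)
    # Quantize the exact heading to the 8 directions a d-pad can express.
    direction = round(angle / (math.pi / 4)) % 8
    # Many distinct tasks collapse onto one context-sensitive button.
    button = "A" if task in ("open_door", "pick_up") else None
    return direction, button

# Two different intents alias to the same controller state:
print(intent_to_controller(10.0, 1.0, "open_door"))   # (0, 'A')
print(intent_to_controller(10.0, -1.0, "pick_up"))    # (0, 'A')
```

Once the intents are squeezed through this interface, the animation system has no way to tell them apart; that is the aliasing.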
The solution in this case is to use a unified control system for animation and AI — at least for the non-player characters. If your AI engine is any good, it should handle this without any trouble!
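What a unified interface might look like, sketched under the assumption that the animation layer accepts AI intents directly (the class and field names here are hypothetical, not from any particular engine):

```python
from dataclasses import dataclass

@dataclass
class MoveIntent:
    target: tuple   # exact world position, not a quantized stick direction
    speed: float    # continuous value, not walk/run buckets
    task: str       # named task the animation layer can pick assets for

class UnifiedAnimationController:
    """One controller for NPCs: consumes AI intents directly,
    with no virtual gamepad in between (illustrative sketch)."""
    def apply(self, intent: MoveIntent) -> str:
        # A real engine would drive blend trees here; we just report the request.
        return f"blend(move_to={intent.target}, speed={intent.speed:.1f}, task={intent.task})"

ctrl = UnifiedAnimationController()
print(ctrl.apply(MoveIntent(target=(12.5, 3.0), speed=1.7, task="carry_crate")))
```

Nothing is quantized on the way in, so every bit of the AI’s intent survives to the animation layer; the player avatar can keep its controller-shaped interface separately.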
Have you noticed behavior aliasing? How would you fix it?