At the base of any game AI system, you’ll find actions and conditions. They’re not only the interface to the virtual world, but also the foundations of the whole logic.
Looking at an AI system from this perspective makes things a lot simpler when designing the system, because it helps you focus on the essence of the problem.
Figure 1: Interface between the game and the AI logic.
The Lowest Level
When you think about abstract concepts like “behaviors”, it can be hard to wrap your head around the logic. But thinking in terms of actions and conditions brings you back to the software execution level. Specifically:
During an update, a chunk of code (like an action) is either executed or it’s not. It’s that simple!
All fuzzy concepts like doing something slightly, with difficulty, or half-heartedly are parameterizations of a running behavior.
So on this level, it’s completely clear what the choices are; it’s a matter of selecting which actions to start. Then when an action terminates, one of two things happens:
A flag indicating “I’m finished” is set from the return status.
An observer is dispatched, providing the same termination status.
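As a minimal sketch of these two termination mechanisms — in Python, with hypothetical names, since the article doesn’t prescribe an API — an action’s update can both return its status and dispatch it to an optional observer:

```python
from enum import Enum

class Status(Enum):
    RUNNING = 0
    SUCCESS = 1
    FAILURE = 2

class Action:
    """A leaf behavior: during an update it either executes or it doesn't."""
    def __init__(self, observer=None):
        self.observer = observer  # optional callback fired on termination

    def update(self):
        status = self.execute()                 # game-specific work goes here
        if status is not Status.RUNNING and self.observer:
            self.observer(status)               # dispatch the termination status...
        return status                           # ...and also return it as a flag

    def execute(self):
        return Status.SUCCESS                   # stand-in for real game code
```

Whether the owner polls the returned flag or reacts to the observer callback is a design choice; both carry the same information.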
The question is, who’s responsible for dealing with what happens next?
A Chain of Responsibility
A consequence of looking at things from the software perspective is that there’s always one thread of execution responsible for any lower-level behavior. This can be considered as a strict hierarchy:
The top-level behaviors execute the lower levels, so there are no ambiguities or conflicts.
Decisions made by consensus or voting are still executed by only one arbitrator.
Modeling this chain of responsibility, like a call stack in programming, is very useful for understanding the relationships between behaviors. It follows the nature of control flow in software.
Figure 2: A hierarchy of behaviors for an action game.
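To make the call-stack analogy concrete, here’s a hedged sketch in Python — `attack`, `aim`, and `fire` are made-up stand-ins for an action game’s behaviors, not names from the article:

```python
# Each function is one level of behavior; the call stack *is* the hierarchy.
def fire():
    print("fire")
    return True      # termination status returned to the single caller

def aim():
    print("aim")
    return True

def attack():
    # The higher-level behavior executes the lower ones directly, so there
    # is exactly one thread of execution responsible for aim() and fire().
    if not aim():
        return False  # a failure propagates up exactly one level
    return fire()

attack()  # prints "aim" then "fire"
```

Real AI systems use explicit behavior objects rather than raw function calls, but the ownership structure is the same: each level only ever answers to the level that invoked it.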
Typically, the owner sets up an observer when executing a lower-level action, which provides a callback when it’s done. Then, the owner has the choice of:
Handling the situation locally, or
Passing back control to the next level.
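A minimal sketch of that choice — handle locally or pass control back up — might look like this (all names here are hypothetical, for illustration only):

```python
class Behavior:
    """Each behavior knows its owner; termination callbacks bubble upward
    until some level chooses to handle them locally."""
    def __init__(self, name, owner=None, handles_failure=False):
        self.name = name
        self.owner = owner
        self.handles_failure = handles_failure
        self.handled_by = None   # recorded for inspection

    def on_terminated(self, child, success):
        if success or self.handles_failure:
            child.handled_by = self.name              # handle it locally
        elif self.owner is not None:
            self.owner.on_terminated(child, success)  # pass control back up

root = Behavior("root", handles_failure=True)
combat = Behavior("combat", owner=root)
shoot = Behavior("shoot", owner=combat)

# "shoot" fails; "combat" can't deal with it, so the callback bubbles up.
combat.on_terminated(shoot, success=False)
print(shoot.handled_by)  # → root
```

This mirrors exception propagation in ordinary code: each level gets first refusal on a child’s termination before deferring to the level above.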
This is very useful during design, as you can model the logic from the bottom up, identifying precisely which callbacks may occur and handling them in a hierarchical fashion. Have you used a similar approach when designing your AI logic?