Game::AI++ is an extensible decision making and control system, specifically designed for actor behaviors in virtual simulations. The library uses a unique combination of existing techniques to overcome three specific challenges:
Challenge: Integration of Designer Input with Automated Algorithms
Generally, AI is a balance between pure reasoning and domain experience. In the games industry, this corresponds to two opposite approaches for building decision making and control systems.
A simple behavior tree.
Human experts create the behaviors by hand, micro-managing every detail. In games, this is typically done with a combination of state machines and behavior trees. The hierarchical structure is traversed top-down, all the way to the leaves, to find valid actions, which are then executed greedily.
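To make the greedy top-down traversal concrete, here's a minimal sketch in Python (the library itself is C++): a selector tries children until one succeeds, a sequence runs children until one fails. The class names are illustrative, not the Game::AI++ API.

```python
class Action:
    """A leaf behavior that records its name and reports success/failure."""
    def __init__(self, name, succeeds=True):
        self.name, self.succeeds = name, succeeds
    def run(self, log):
        log.append(self.name)
        return self.succeeds

class Sequence:
    def __init__(self, *children): self.children = children
    def run(self, log):
        # All children must succeed, in order; stop at the first failure.
        return all(c.run(log) for c in self.children)

class Selector:
    def __init__(self, *children): self.children = children
    def run(self, log):
        # Greedily take the first child branch that succeeds.
        return any(c.run(log) for c in self.children)

log = []
tree = Selector(
    Sequence(Action("see_enemy", succeeds=False), Action("attack")),
    Sequence(Action("patrol"), Action("wave")),
)
tree.run(log)
print(log)   # the failed branch is abandoned, the patrol branch runs
```

Note there is no look-ahead: the tree commits to the first branch whose conditions hold, which is exactly the greedy execution described above.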
A planner works out the correct order of behaviors, typically using the STRIPS formalism. This works like path-finding: low-level actions are combined using A* to find a way to reach the goal. There’s no need to assemble the behaviors directly, but it requires more programming to model the world, annotate each action with preconditions and side effects, gather data, then execute the plan, and so on.
A simple STRIPS search tree.
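A toy STRIPS-style search can be sketched as follows, using plain breadth-first search instead of A* for brevity. States are sets of facts, and each action lists the facts it requires (preconditions) and the facts it adds or deletes (effects); the domain here is made up for illustration.

```python
from collections import deque

# action name -> (preconditions, add effects, delete effects)
actions = {
    "open_door":    ({"has_key"}, {"door_open"}, set()),
    "get_key":      (set(), {"has_key"}, set()),
    "walk_through": ({"door_open"}, {"inside"}, set()),
}

def plan(start, goal):
    """Forward search from the start state until the goal facts hold."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, (pre, add, delete) in actions.items():
            if pre <= state:                      # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                                   # goal unreachable

print(plan(set(), {"inside"}))   # ['get_key', 'open_door', 'walk_through']
```

The programmer's workload mentioned above is visible here: every action has to be annotated with preconditions and effects before the search can do anything.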
Ideally you want both; it should be possible for designers to control everything if they wish to (e.g. to add realistic, human-like quirks to the behaviors), but otherwise let the algorithms take over (using pure logic to find intelligent solutions). This would spread the workload more evenly among the development team; the designer doesn’t have to baby-sit a state machine or behavior tree, and the programmer doesn’t need to explain everything about “realistic” behavior to a planner.
Solution: Express Each Behavior as a Custom Search
The common technique behind both approaches is the tree search. A planner searches a uniform tree of dependencies and side effects, and a behavior tree has layers of different behaviors that traverse their children. So whether the search tree is structured backwards from the goal, forwards from the current state, or top-down from the root behavior, they are still fundamentally all trees.
To unify these two approaches, it should be easy to create behaviors that are custom search interpreters. So for example:
- A node in the planner’s search tree would recursively ask its dependencies to search themselves.
- A node in the top-down tree requests its child behaviors to run and accomplish their task.
Each of these nodes overrides the search locally, and does not know how other nodes are searched. This makes the system modular, and it then becomes possible to mix and match between the different kinds of nodes, thereby assembling different kinds of trees together. (This is, in effect, a form of hierarchical planning.)
A custom search tree; each behavior is modular.
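One way to picture the mixing of interpreters is the following sketch, in which each node implements its own `search()` and knows nothing about how its neighbours search: a selector node searches top-down like a behavior tree, while a planner node works through its dependencies before its own goal. The class names are illustrative, not the Game::AI++ API.

```python
class ActionNode:
    def __init__(self, name): self.name = name
    def search(self):
        return [self.name]             # leaf: the action itself is the result

class SelectorNode:
    def __init__(self, *children): self.children = children
    def search(self):
        # Top-down interpreter: first child that yields a result wins.
        for child in self.children:
            result = child.search()
            if result:
                return result
        return []

class PlannerNode:
    def __init__(self, goal, *dependencies):
        self.goal, self.dependencies = goal, dependencies
    def search(self):
        # Planner interpreter: satisfy every dependency, then act.
        plan = []
        for dep in self.dependencies:
            sub = dep.search()
            if not sub:
                return []              # an unsatisfiable dependency fails the plan
            plan.extend(sub)
        return plan + [self.goal]

tree = SelectorNode(
    PlannerNode("attack", ActionNode("draw_weapon"), ActionNode("aim")),
    ActionNode("flee"),
)
print(tree.search())   # ['draw_weapon', 'aim', 'attack']
```

Because each node only calls `search()` on its children, a planner branch can sit inside a designer-authored selector (and vice versa) without either knowing about the other.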
It’s simple in theory, but there are two catches to get this right:
- Having only one search interpreter, like A*, is very fast. Having a different interpreter in each node slows things down, most likely by an order of magnitude. But these custom nodes are a great way to provide domain experience, as in a hierarchical path-finder. That can speed up the overall search by multiple orders of magnitude, so you end up with a net gain for most trees. (The fastest planners are hierarchical ones, not heuristic ones.)
- Ultimately, the system must also be easy to use by the designers, at least as much as the behavior trees are. To do this, it’s necessary to edit the tree as a set of chunks that plug together well. Providing common top-down control primitives like sequences, selectors and decorators achieves this for designers. It also becomes easy for programmers to annotate actions with dependencies and side effects.
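Decorators, the third control primitive mentioned above, can be sketched as a node that wraps a single child and transforms its result, so designers can plug chunks together without modifying the child. The inverter below is the classic example; the class names are illustrative.

```python
class Succeed:
    """A trivial leaf that always succeeds."""
    def run(self):
        return True

class Inverter:
    """A decorator: wraps one child and flips its success/failure status."""
    def __init__(self, child):
        self.child = child
    def run(self):
        return not self.child.run()

print(Inverter(Succeed()).run())   # False
```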
Challenge: Unification of Planning and Execution
Making high quality behaviors, both intelligent and realistic ones, involves combining decision making (planning) with control and monitoring (execution). They’re equally important, yet there’s a tendency in most games to focus on either one or the other…
A behavior tree uses almost no planning. Instead, it cleverly relies on greedy search for behaviors that match, so that no look-ahead is necessary (for well-designed trees). But it’s not entirely reliable; designers can’t guarantee that every behavior tree works in all contexts.
A raw planner provides no execution; it assumes that the plan itself can be executed correctly. When the plan fails, the planner starts again from scratch, which is not the best approach on large problems.
So why unify them? A complex behavior can’t be fully planned upfront, so it should be easy to change a plan dynamically as the world changes. Dealing with failure is a very human quality, and getting it right is important. By integrating the planning tree with the execution tree, the designer gets as much control over both, and can specify how they fit together to make the most realistic behaviors.
For example, it would be possible to:
- Support lazy replanning, where problems are dealt with on the fly as they occur.
- Support incremental planning, since only the start of the plan would be executable.
- Fake human planning by executing certain behaviors while planning, e.g. talking about options, making gestures, or taking time to remember information.
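Lazy replanning, for instance, can be sketched as an execute loop that only replans, from the current state, when a step fails. The `toy_plan`/`toy_execute` helpers below are hypothetical stand-ins for a real planner and executor.

```python
def run_with_replanning(plan, execute, state, max_replans=5):
    for _ in range(max_replans):
        steps = plan(state)
        if not steps:
            return True                # nothing left to do: goal reached
        for step in steps:
            if not execute(step, state):
                break                  # step failed: replan from where we are
        else:
            return True                # whole plan executed successfully
    return False                       # gave up after too many replans

log = []

def toy_plan(state):
    if "inside" in state:
        return []
    if "door_jammed" in state:
        return ["force_door", "enter"]
    return ["open_door", "enter"]

def toy_execute(step, state):
    log.append(step)
    if step == "open_door":
        state.add("door_jammed")       # discover mid-execution it's jammed
        return False
    state.add({"force_door": "door_open", "enter": "inside"}[step])
    return True

run_with_replanning(toy_plan, toy_execute, set())
print(log)   # ['open_door', 'force_door', 'enter']
```

The key point is that the failed `open_door` step leaves its side effect in the state, so the second plan starts from where the actor actually is rather than from scratch.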
Also, unifying the two algorithms results in much simpler code! So, how does Game::AI++ achieve this?
Solution: Everything is a Behavior Transformation
Assume the following: the outcome of planning is a set of possible behaviors to be executed, and the outcome of execution is a side effect in the world. Both also indicate whether they succeed or fail. Ultimately, it’s possible to treat them all in the same way by looking at them as behaviors that return other valid behaviors in that context.
- A planning behavior returns a list of possible behaviors to be executed in the world.
- An execution behavior, once run, returns an empty behavior that does nothing.
- A planning-and-execution behavior will keep returning different trees that combine planning and execution.
A tree with two valid transformations in this context.
This process is like transforming a behavior tree into other trees that are valid in that context. The client behavior can then select which resulting tree it wants to run.
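As a sketch, assume behaviors are callables that take a context and return the behaviors valid next: a planning behavior returns a list of behaviors to execute, and an execution behavior performs its side effect and returns an empty list. The names and the dict-based context here are illustrative.

```python
def make_action(name):
    def action(context):
        context["log"].append(name)    # execution: a side effect in the world
        return []                      # ...then nothing follows
    return action

def choose_route(context):
    # Planning: return the behaviors that are valid in this context.
    if context.get("door_open"):
        return [make_action("walk_through")]
    return [make_action("open_door"), make_action("walk_through")]

def run(behavior, context):
    # The client keeps applying transformations until nothing remains.
    pending = [behavior]
    while pending:
        current = pending.pop(0)
        pending = current(context) + pending
    return context["log"]

print(run(choose_route, {"log": [], "door_open": False}))
# ['open_door', 'walk_through']
```

Because planning and execution share one signature, the `run` loop never needs to know which kind of behavior it is holding.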
Challenge: Ubiquitous Support for Concurrency
The most realistic and intelligent behaviors involve doing multiple things at the same time. In particular, it’s important for game actors to:
- Support multiple long-term behaviors.
- Handle events and short-term responses at the same time.
- Plan on multiple independent problems simultaneously.
Here’s how Game::AI++ handles it.
Solution: Implement Behavior Search with Cooperative Scheduling
Behaviors for both planning and execution are written as coroutines. These are functions with their own memory (closures) that can pause and resume where they left off. For convenience, there’s a scheduler responsible for managing these coroutines.
The process of searching another node is done by creating a transformation, which behaves like a lazy iterator/generator. The owner of a behavior can request transformations by providing a callback, then waiting for the scheduler to process that search.
In effect, all the behaviors are collaborating together at all times to produce the overall result.
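The scheduling idea can be sketched with Python generators standing in for the C++ coroutines: each behavior yields to give up the CPU, and the scheduler interleaves all active behaviors in round-robin fashion. The helper names are illustrative.

```python
from collections import deque

def scheduler(behaviors):
    """Cooperatively run generator-based behaviors until all finish."""
    ready = deque(behaviors)
    trace = []
    while ready:
        current = ready.popleft()
        try:
            trace.append(next(current))   # run until the behavior yields
            ready.append(current)         # cooperative: back of the queue
        except StopIteration:
            pass                          # behavior finished
    return trace

def behavior(name, steps):
    # A behavior as a coroutine: it keeps its own state between resumptions.
    for i in range(steps):
        yield f"{name}:{i}"

print(scheduler([behavior("plan", 2), behavior("act", 3)]))
# interleaved: ['plan:0', 'act:0', 'plan:1', 'act:1', 'act:2']
```

The interleaved trace shows the "collaborating" quality described above: planning and acting make progress in the same frame, neither blocking the other.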
If you don’t understand something, or if part of the design does not make sense to you, then feel free to ask by posting a comment. The best questions stand a chance to win an AiGameDev.com T-shirt at the end of the month!