How to Build Concurrent Behaviors

Alex J. Champandard on June 26, 2007

For multiple behaviors to run at the same time, they must not compete over the same resources. In practice, you have two design strategies to ensure that your behaviors work well in parallel. Both of these involve getting the most out of the other modules in the game engine — where possible.

Independent Behaviors

Make sure you build your behaviors with very limited responsibilities. This way, whole hierarchies are more likely to run in parallel and not clash on the same resource.

For example, moving on a patrol route requires a walk animation, and a behavior for looking around requires turning the head. Ideally, you should split up all your animation behaviors to control separate parts of the body:

  • Head orientation

  • Torso facing direction

  • Direction of travel

  • Speed of travel

  • Style of motion

… and so on. The advantage here is that you can make full use of your animation system, and allow the AI to express multiple different behaviors through its body — which would not be possible using only a full body behavior. (Of course, the low-level motion capture data could be a full-body animation, but you can select that based on these multiple parameters that are controlled independently.)
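To make this concrete, here is a minimal sketch of per-channel animation control. All the names (`Channel`, `patrol`, `look_around`) are hypothetical, not taken from any particular engine; the point is simply that two behaviors can run in parallel when the sets of body channels they write to don't overlap:

```python
from enum import Enum, auto

class Channel(Enum):
    """The independently controllable parts of the body."""
    HEAD = auto()
    TORSO = auto()
    TRAVEL_DIRECTION = auto()
    TRAVEL_SPEED = auto()
    MOTION_STYLE = auto()

def channels_clash(a, b):
    """Two behaviors conflict only if their channel sets intersect."""
    return bool(a & b)

# Each behavior declares up front which channels it writes.
patrol = {Channel.TRAVEL_DIRECTION, Channel.TRAVEL_SPEED, Channel.MOTION_STYLE}
look_around = {Channel.HEAD}

print(channels_clash(patrol, look_around))  # False: safe to run in parallel
```

A scheduler can use this declaration to decide at runtime which pairs of behaviors may be activated together, rather than hard-coding the combinations.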

Cooperating Behaviors

Alternatively, you can build your behaviors to cooperate. A good example of this is a path-finding system that supports multiple targets.

Typically with A* search, you can only set one destination, and the path-finder will give you the optimal route to it. However, there are algorithms that can support multiple targets without much extra computation. For your behaviors, this means they can cooperate by each providing a desired target, and letting the path-finder sort it out.

For example, two behaviors running concurrently, one defending an area on patrol and one filling up on armor/health, could cooperate in this way. They'd simply pass the desired destinations of the required objects (health-packs and patrol corners) to the path-finder, and a compromise path would be chosen.
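One simple way to implement this idea is a uniform-cost (Dijkstra-style) search that stops at whichever of several submitted targets is reached first. This is only a sketch, and the graph below is a made-up set of waypoints, not a real level:

```python
import heapq

def path_to_nearest(graph, start, targets):
    """Search outward from start and return the path to whichever of
    several targets is reached first -- a compromise between the
    destinations submitted by concurrent behaviors."""
    targets = set(targets)
    frontier = [(0, start, [start])]   # (cost so far, node, path)
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node in targets:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, step_cost in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(frontier, (cost + step_cost, neighbor, path + [neighbor]))
    return None

# Hypothetical waypoint graph: 'H' is a health-pack, 'P' a patrol corner.
graph = {
    'start': [('a', 1), ('b', 4)],
    'a': [('H', 2)],
    'b': [('P', 1)],
}
print(path_to_nearest(graph, 'start', {'H', 'P'}))  # ['start', 'a', 'H']
```

Here the health-pack wins simply because it is closer; with a heuristic toward the nearest target this generalizes to multi-target A*.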

Allocating Resources

Most likely, despite trying to get your behaviors to cooperate, you'll still end up with clashes. It's nothing to worry about; a simple resource allocation strategy can prevent problems. The idea is to schedule requests using a queue, and let behaviors use the resource one by one, in order.

It can be tricky if you want to schedule behaviors based on priorities, but otherwise this provides a very good quality of behavior in general. For example, if you have a team communication behavior running in parallel with an attack behavior, then both may need to use the arms/hands to signal team-mates or use the weapon. Having the AI do one after the other typically looks just fine.
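A minimal sketch of that first-come, first-served allocation might look like this. The `Resource` class and the behavior names are illustrative only:

```python
from collections import deque

class Resource:
    """A shared body resource (e.g. the arms) granted to one behavior
    at a time, in request order."""

    def __init__(self, name):
        self.name = name
        self.queue = deque()
        self.owner = None

    def request(self, behavior):
        # Behaviors line up instead of fighting over the resource.
        self.queue.append(behavior)

    def update(self):
        # Grant the resource to the next behavior in line, if it's free.
        if self.owner is None and self.queue:
            self.owner = self.queue.popleft()
        return self.owner

    def release(self):
        self.owner = None

arms = Resource("arms")
arms.request("signal_teammates")
arms.request("fire_weapon")
print(arms.update())   # signal_teammates gets the arms first
arms.release()
print(arms.update())   # then fire_weapon takes its turn
```

Priority scheduling would mean replacing the deque with a priority queue and deciding whether a high-priority request may interrupt the current owner, which is where the tricky cases come in.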

Using these tricks, you should be able to get the most out of concurrent behaviors. Do you have any other ways to maximize concurrency?

Discussion

gware on July 2nd, 2007

One pattern I like from Ruby's Rake is its ability to run tasks in parallel. What's the point here? As this post emphasizes, building your engine to support multiple kinds of scheduling for your behaviors will let you get the most out of both your platform and your designers. Rake allows users to define prerequisites for their tasks. When asked to run a task, it analyses the prerequisites:

  • If some tasks share a common prerequisite, the prerequisite task is run first;

  • all remaining tasks should be able to run in parallel.

This simple design allows the build system to run tasks in parallel and so reduce build time to its minimum. Using this simple pattern in your scheduler can help you get the most out of your behaviors. I think this post describes the same idea used by Rake: tasks and their prerequisites are cooperative behaviors, while simple tasks with no dependencies are independent behaviors. A scheduler can implement this easily by analyzing what is to be run in its queue. Use an input queue for users: they simply enqueue their behaviors (which enqueue their prerequisites). When running the scheduler, remove common behaviors and order them in a work queue. Then you can easily find what can run in parallel in this work queue, and where in the process you should wait for data. To manage memory easily, you can introduce atomic tasks that copy the data to be used and make them prerequisites (and, perhaps, use a post-process or reference counting to clean up allocated data from your working pool). A very good thing when building this kind of system is to have a "dry run" operation to help with debugging. I think an issue with this design may be the size of the input queue used by the scheduler. There should be some way to reduce the queue size, shouldn't there?

alexjc on July 2nd, 2007

Great points, Gabriel! I haven't looked into Rake yet, but it sounds very interesting. It seems the big difference with Rake is that the user is interested in the result of the tasks, whereas in game AI we're more interested in how the behaviors are run. For game AI, however, I think we need some much stronger primitives for synchronizing behaviors within the parallel trees. You can do this by using a resource allocation strategy that behaves as a queue. I mention the parallel briefly in this post with [URL=http://aigamedev.com/hierarchical-logic/advice-2]advice for hierarchical logic[/URL], but I'm writing more about it in the next article, "How to Build Concurrency into Hierarchies." :-) Anyway, it makes for a wonderful transition, Gabriel! Thanks for your insights. Alex

Ian Morrison on August 20th, 2007

You mention algorithms that allow A* to search for multiple targets. I've tried to find such algorithms with the usual suspects (google, wikipedia, gamedev.net), but haven't managed to come up with anything. What should I be searching for to find information on this?

unixtech on October 31st, 2008

The Breve Multi-Agent Simulation Platform is an excellent choice for modeling real-time physics, neural networks, and evolutionary concurrent programs. Some of its features include:

  • a built-in client-server model for decentralized command and control;

  • simulations run from the command-line interface, or with state-of-the-art 3D rendering in OpenGL;

  • standards-based, industrial-strength Python and XML integration.

Breve is free software, licensed under the GNU General Public License. Breve gave us a concurrent simulation standard to accurately model and control firefly synchronization. Fireflies are able to communicate over vast distances and coordinate tasks deep into the network. This has given us deep insight into biologically inspired concurrent resource utilization, and has revealed hidden patterns of parallel computation.
