Memory Pooling for AI Tasks

Alex J. Champandard on June 19, 2007

With many large task hierarchies running in parallel, it can be impossible or undesirable to store everything in memory. It may be faster to allocate all the memory upfront for small trees, but in practice a lazy approach is better. However, because many tasks need their own custom memory (of different types and sizes), this policy causes lots of small memory allocations.

This is bad for a couple of reasons:

  • Allocations are very expensive, even for small objects.

  • Fragmentation reduces the total amount of memory available.

Obviously, memory allocations are important to get right on consoles or mobile devices, but even on the PC you can get a huge performance boost.

One possible solution is to use memory pooling. The approach works as follows:

  1. Try to keep all task memory of similar sizes (the part that’s dynamically allocated).

  2. Pre-allocate a large buffer of memory and split it into fixed-size chunks.

  3. Maintain a list of free chunks from this large pool.

  4. When allocating, just construct in-place directly in an available chunk.

  5. For deallocation, call the destructor and add the chunk to the free list.

This strategy has constant O(1) allocations and deallocations, so it’s much faster than the default policy. It also makes it easy to free all the tasks at once, or even to track memory leaks.
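The steps above can be sketched as a minimal fixed-size pool with an intrusive free list. This is an illustrative sketch, not code from the article: `FixedPool` and its members are invented names, and a production version would also need to handle alignment requirements and thread safety.

```cpp
#include <algorithm>
#include <cstddef>
#include <new>
#include <vector>

// Minimal fixed-size pool with an intrusive free list (sketch).
class FixedPool {
public:
    FixedPool(std::size_t chunkSize, std::size_t chunkCount)
        // Each free chunk stores a pointer, so chunks must be
        // at least pointer-sized.
        : chunkSize_(std::max(chunkSize, sizeof(void*))),
          buffer_(chunkSize_ * chunkCount),
          freeList_(nullptr)
    {
        // Split the pre-allocated buffer into fixed-size chunks
        // and thread the free list through them.
        for (std::size_t i = 0; i < chunkCount; ++i) {
            void* chunk = &buffer_[i * chunkSize_];
            *static_cast<void**>(chunk) = freeList_;
            freeList_ = chunk;
        }
    }

    // O(1): pop the first free chunk off the list.
    void* allocate() {
        if (!freeList_) return nullptr;  // pool exhausted
        void* chunk = freeList_;
        freeList_ = *static_cast<void**>(chunk);
        return chunk;
    }

    // O(1): push the chunk back onto the free list.
    void deallocate(void* chunk) {
        *static_cast<void**>(chunk) = freeList_;
        freeList_ = chunk;
    }

private:
    std::size_t chunkSize_;
    std::vector<char> buffer_;  // one large upfront allocation
    void* freeList_;            // intrusive singly-linked free list
};

// Example task constructed in-place in a pooled chunk.
struct ExampleTask { int id; };

void example() {
    FixedPool pool(sizeof(ExampleTask), 128);
    void* mem = pool.allocate();
    ExampleTask* t = new (mem) ExampleTask{7};  // placement new
    t->~ExampleTask();  // explicit destructor call
    pool.deallocate(mem);
}
```

Both operations only touch the head of the free list, which is what makes them constant-time.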

A good library for this is the Boost Pool Library. At the cost of having to compile against Boost, it provides the most flexible pool implementation. For example, from the interface documentation:

#include <boost/pool/pool.hpp>

void test_pool()
{
    boost::pool<> p(64);    // pool of 64-byte chunks
    for (int i = 0; i < 100; ++i)
    {
        void* const chunk = p.malloc();
        // Use in-place construction.
        Task* t = new (chunk) MyTask;
        // Optionally call p.free(chunk) to recycle the chunk early.
    }
}   // All chunks are released here, when the pool is destroyed.
It’s easier to implement this pooling strategy in the engine if all the memory allocations are centralized in one location. Then you need to work out whether it’s better to have a pool per actor, per group, or one for all the AI as a whole. Typically, if you allocate a chunk of memory per actor, its size will depend on the relative complexity of that actor, as you’ll want to give more memory to your best and most visible actors.
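As a hypothetical illustration of per-actor sizing (the function name and weights below are invented for this sketch, not from the article), a simple budget rule might scale each actor's pool by an importance rating:

```cpp
#include <cstddef>

// Hypothetical sizing rule: scale each actor's pool budget by an
// importance rating. The name and weights are illustrative only.
std::size_t actorPoolBytes(int importance,        // e.g. 1 = background, 4 = hero
                           std::size_t chunkSize, // bytes per task chunk
                           std::size_t baseChunks)
{
    // A hero actor gets four times the chunks of a background actor.
    return chunkSize * baseChunks * static_cast<std::size_t>(importance);
}
```

For example, with 64-byte chunks and a base of 32 chunks, a background actor would get 2 KB and a hero actor 8 KB.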

Discussion (2 comments)

alexjc on June 20th, 2007

I'm torn by using Boost. In this particular case, it seems there aren't that many alternatives for pooling code -- except writing your own. Boost can really kill your compile times if you're not careful. I know a few commercial projects that specifically avoid it, for this reason mainly. As for open-source C++ code, most users are turned off by it. Any dependency is an extra hassle in practice. Alex

alexjc on June 21st, 2007

Good point OJ. Templates aren't the best thing for code bloat generally, but the modern/generic C++ that boost is written in certainly doesn't help! Like the compile times, you can reduce the impact of code bloat by watching out for it... If you just include one header (e.g. smart pointers), most of the rest of the code of boost that you compiled is never linked in to the final executable. But, as mos said, to each his own! Thanks for your comments. Alex
