Task Queue in the Scheduler

Alex J. Champandard on June 17, 2007

A scheduler is a great way to centralize the management of tasks. However, to get the most benefit out of it, you'll probably need a combination of two tricks: a work queue and the command pattern. These give your AI engine the following benefits:

  • All the AI processing is handled in a uniform way; no special cases.

  • The C++ stack size does not depend on data created by the designers.

  • Large behavior trees can’t cause the AI engine to suffer a stack overflow.

  • You can interrupt the AI at any stage of the execution and resume later.

  • It’s easy to limit the amount of processing to be done each update.

To implement this, you need a queue to store tasks and a loop that processes the queue until it’s empty. This is a combination of two software patterns…

Command Pattern

Typically, a Command is an object that exposes a simple interface for executing arbitrary chunks of work. The Task class is very similar, except that it can be executed latently over multiple updates. So for AI, you'll want to use that instead, as it allows you to implement long-running behaviors.

The advantage of having a standard interface is that all the tasks can be treated in the same way. So no matter what functionality needs managing by the scheduler, everything will be O.K. as long as you can implement it as a task.
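As a rough sketch of such an interface (the Status codes and the CountdownTask example are my assumptions, not from the article), a latent Task might look like this:

```cpp
#include <cassert>

// Assumed status codes for latent execution.
enum class Status { Running, Completed, Failed };

// Minimal Task interface: like a Command, but execute() may be
// called once per update until it stops returning Running.
class Task
{
public:
    virtual ~Task() {}
    virtual Status execute() = 0;
};

// Example: a task that completes after a fixed number of updates.
class CountdownTask : public Task
{
public:
    explicit CountdownTask(int updates) : m_Remaining(updates) {}

    Status execute() override
    {
        return (--m_Remaining > 0) ? Status::Running : Status::Completed;
    }

private:
    int m_Remaining;
};
```

Because every behavior goes through the same execute() call, the scheduler never needs to know what a task actually does.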

Work Queue

A work queue is a list of commands to be run. Most often, this happens in no specific order on multiple hardware threads selected from a pool. In the case of an AI scheduler, however, it’s best done in order on a single thread, using the latent execution of Task objects to provide cooperative multi-tasking.

There are two parts of the code that deal with the work queue:

  1. Seeding the work queue with initial tasks.

  2. Processing the queue in a loop until empty.

This process is repeated every update of the AI.

Implementation

The scheduler maintains an array of active tasks along with the data necessary for execution (e.g. the status code and an observer). This array is used by prepare() to seed the work queue each update. Then the step() function is called repeatedly to pop an item from the front of the queue and process it.

#include <deque>

class Scheduler
{
public:
    void prepare();
    bool step();

protected:
    // Execution environment.
    struct Entry
    {
        Task* task;
        Status status;
        /* ... */
    };

    std::deque<Entry> m_WorkQueue;
};

The Standard Template Library's std::deque supports all the features you need for this, as it can be extended efficiently at both the front and the back if necessary.
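To make this concrete, here is one possible sketch of prepare() and step(); the Task interface, Status codes, and the separate active-task list are assumptions filled in around the class declaration above:

```cpp
#include <cassert>
#include <deque>
#include <vector>

enum class Status { Running, Completed };

// Assumed minimal Task interface; execute() runs one slice of work.
class Task
{
public:
    virtual ~Task() {}
    virtual Status execute() = 0;
};

class Scheduler
{
public:
    void add(Task* task) { m_Active.push_back(task); }

    // Seed the work queue from the list of active tasks.
    void prepare()
    {
        m_WorkQueue.clear();
        for (Task* task : m_Active)
            m_WorkQueue.push_back(Entry{task, Status::Running});
    }

    // Pop one entry from the front of the queue and execute it.
    // Returns false once the queue is empty for this update.
    bool step()
    {
        if (m_WorkQueue.empty())
            return false;

        Entry entry = m_WorkQueue.front();
        m_WorkQueue.pop_front();
        entry.status = entry.task->execute();
        // A full scheduler would notify observers and retire
        // completed tasks here.
        return true;
    }

protected:
    struct Entry
    {
        Task* task;
        Status status;
        /* ... */
    };

    std::vector<Task*> m_Active;   // persists across updates
    std::deque<Entry> m_WorkQueue; // rebuilt each update
};
```

Each AI update then amounts to calling prepare() once, followed by step() in a loop until it returns false; capping the number of step() calls is how you limit the processing done per update.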

Notes and Advice

Because new tasks are mostly created while the scheduler is updating, it's easier to separate the list of active tasks from the queue of tasks to be processed during this update. (It's a form of double buffering that can save quite a few headaches.)

Also, if priorities are used for the tasks, then the scheduler can provide a customizable policy for sorting the work queue. For example, you may want tasks of certain types to be executed last, or have a form of layering in the AI behaviors.
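One way such a sorting policy might look (the Entry layout and priority convention here are assumptions): run a stable sort over the freshly seeded queue, so that entries with equal priority keep their insertion order, which gives a simple form of layering for free.

```cpp
#include <algorithm>
#include <cassert>
#include <deque>

// Assumed queue entry with a priority field; lower values run first.
struct Entry
{
    int id;
    int priority;
};

// Customizable policy applied after seeding the work queue.
// std::stable_sort preserves insertion order among equal priorities.
void sortByPriority(std::deque<Entry>& queue)
{
    std::stable_sort(queue.begin(), queue.end(),
        [](const Entry& a, const Entry& b)
        { return a.priority < b.priority; });
}
```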

Finally, since tasks are very flexible, they should be used for literally everything:

  • Handling incoming events from the world. Simply create a handler for the incoming message as a new task, and add it to the front of the queue ready for the next update.

  • Processing complex task observers. (Observers should not be recursive for the same reasons that tasks aren’t handled with the C++ stack.)
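As a hedged sketch of the first bullet (the MessageTask type is hypothetical): the incoming event is wrapped in a task and pushed onto the front of the deque, so it runs before the regular tasks on the next update.

```cpp
#include <cassert>
#include <deque>
#include <string>

// Hypothetical handler task; a real one would implement the same
// Task interface the scheduler uses for everything else.
struct MessageTask
{
    std::string message;
};

// Wrap an incoming world event as a task and add it to the front
// of the work queue, ready for the next update.
void postEvent(std::deque<MessageTask>& queue, const std::string& message)
{
    queue.push_front(MessageTask{message});
}
```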

Once you have this construct in place, you’ll find even more uses for it than you expected!
