Everything that happens in a virtual world is called an event. It’s represented as a data structure containing all the information needed to specify what happened, why, and to whom. For actors to behave realistically, they must react, and potentially adapt, to every such piece of information.
An event loop is a good way to solve this problem. This design pattern involves two things:
A loop that iterates over all the incoming events one by one.
A mechanism to dispatch events to their corresponding handlers.
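These two parts can be sketched in a few lines. This is a minimal, hypothetical illustration (the class and method names are invented for the example, not taken from any particular engine): a queue of incoming events is drained one by one, and each event is dispatched to a handler registered for its type.

```python
from collections import deque

class EventLoop:
    """Minimal event loop: drain a queue and dispatch each event by type."""

    def __init__(self):
        self.queue = deque()     # pending events, processed in FIFO order
        self.handlers = {}       # event type -> handler function

    def post(self, event_type, data=None):
        # Incoming events are appended as simple (type, data) messages.
        self.queue.append((event_type, data))

    def register(self, event_type, handler):
        self.handlers[event_type] = handler

    def run_once(self):
        # The loop: iterate over all pending events one by one.
        while self.queue:
            event_type, data = self.queue.popleft()
            # The dispatch mechanism: look up the matching handler.
            handler = self.handlers.get(event_type)
            if handler is not None:
                handler(data)

loop = EventLoop()
seen = []
loop.register("damage", lambda d: seen.append(("damage", d)))
loop.post("damage", 10)
loop.run_once()
```

Because events are plain messages here, the same structure doubles as a message pump: producers post messages at any time, and the loop drains them when it runs.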
This pattern is also known as a message pump, as events can be implemented as messages. Most user interfaces and operating systems use this concept.
Dispatching Events to Handlers
The only tricky part of the event loop is matching up events with handlers dynamically. There are a few important questions to ask yourself:
Which handlers are valid for each event? Events have different types and dynamic data, so filtering must be done at runtime.
Can certain handlers stop events from being dispatched to other handlers?
What order should valid event handlers be processed in? Does this order depend on dynamic data from each event?
These decisions can have complex ramifications, so you’re better off picking the simplest solution first:
Have only one global (composite) handler that’s responsible for finding a child handler to process the event.
Run only one specific handler per event to prevent clashes (and prioritization problems), but let that handler call others if necessary.
Add dynamic handlers to the global composite as necessary.
With this approach, most other event dispatching strategies can be implemented later as a special case.
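One way this simple strategy might look in code is the following sketch (the predicate-based filtering and the names are assumptions for illustration): a global composite holds child handlers, filters them against the event at runtime, and runs only the first match, which is free to delegate to other handlers itself.

```python
class CompositeHandler:
    """Global composite: picks exactly one child handler per event."""

    def __init__(self):
        # (predicate, handler) pairs; order doubles as priority.
        self.children = []

    def add(self, predicate, handler):
        # Dynamic handlers can be added to the composite at any time.
        self.children.append((predicate, handler))

    def handle(self, event):
        # Filtering happens at runtime against the event's dynamic data.
        # Only the first matching handler runs, avoiding clashes; it may
        # call other handlers itself if necessary.
        for predicate, handler in self.children:
            if predicate(event):
                return handler(event)
        return None

composite = CompositeHandler()
composite.add(lambda e: e.get("type") == "collision",
              lambda e: f"dodge {e['source']}")
composite.add(lambda e: True,              # catch-all for unmatched events
              lambda e: "ignore")

result = composite.handle({"type": "collision", "source": "rock"})
```

Prioritized dispatch then falls out of the registration order, and broadcast-style dispatch can be implemented as a child handler that forwards the event to several others.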
Integrating the Loop with the Scheduler
Unlike an operating system or a user interface, a game can’t afford to run an AI event loop continuously in the background. Instead, the implementation must integrate with your scheduler so that events are processed only when the AI is ready to update.
If the scheduler is implemented with a work queue, then event handlers can be appended to the back of the queue as new messages are encountered. During the following update, the work queue would already be seeded with tasks that deal with each new event.
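A rough sketch of that integration, assuming a hypothetical work-queue scheduler (the names and the per-update task budget are invented for the example): when a message arrives, a task that binds the handler to the event is appended to the back of the queue, so the next update finds the queue already seeded with event-processing work.

```python
from collections import deque
from functools import partial

class Scheduler:
    """Work-queue scheduler: AI tasks and event tasks share one queue."""

    def __init__(self):
        self.work_queue = deque()

    def push_task(self, task):
        # Regular AI work (e.g. behavior updates) goes through here.
        self.work_queue.append(task)

    def on_event(self, handler, event):
        # New message encountered: append a task to the back of the queue
        # that will process this event during the following update.
        self.work_queue.append(partial(handler, event))

    def update(self, budget=None):
        # Drain queued tasks; an optional budget caps work per frame.
        count = 0
        while self.work_queue and (budget is None or count < budget):
            task = self.work_queue.popleft()
            task()
            count += 1

sched = Scheduler()
log = []
sched.on_event(lambda e: log.append(e), "enemy_spotted")
sched.update()
```

The key property is that event handling costs nothing between updates: events simply accumulate as tasks until the scheduler decides the AI is ready to run.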