Article

Behavior Priorities: Avoid Them or Embrace Them?

Alex J. Champandard on November 27, 2007

Most AI programmers I know (myself included) have a love-hate relationship with priority levels for AI behaviors. Developers often use them reluctantly, assuming there must be a better solution out there. These priorities are the subject of this week’s developer discussion.

Basically, the idea is to assign a priority level to every AI behavior to help decide which is most important when several are applicable at the same time, or execute concurrently (see the sketch after the list below). Simple, right? Well, things get tricky for multiple reasons…

  • Maintaining a standard convention for the meaning of priorities is necessary, but hard to achieve in practice.

  • Priority levels always seem to run out when you make the mistake of picking the wrong value for an important behavior!

  • Sometimes, finding static priority values for a behavior is a challenge in itself!
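To make this concrete, here’s a minimal sketch of what priority-based arbitration typically boils down to; the types and names are illustrative, not taken from any particular engine:

    // Minimal sketch of static priority arbitration (illustrative only).
    // Each behavior declares a fixed priority; when several are applicable
    // at once, the arbiter simply runs the one with the highest value.
    #include <vector>

    struct Behavior {
        int priority;                          // hand-picked static value
        virtual bool isApplicable() const = 0;
        virtual void execute() = 0;
        virtual ~Behavior() {}
    };

    Behavior* arbitrate(const std::vector<Behavior*>& behaviors) {
        Behavior* best = 0;
        for (size_t i = 0; i < behaviors.size(); ++i) {
            Behavior* b = behaviors[i];
            if (b->isApplicable() && (!best || b->priority > best->priority))
                best = b;   // note: ties resolve arbitrarily here, which is
        }                   // exactly the kind of convention that needs agreeing on
        return best;
    }

All three problems above live in that one comparison: someone has to agree on what the numbers mean, leave gaps between them, and decide how ties break.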

How do you resolve this problem in practice?

  1. Do you need/use priorities for your AI behaviors?

  2. Are you happy with this approach?

  3. Have you found something better?

Personally, I split up behaviors with dynamic priorities until I can assign a static value to them (like in Halo). Then I try to keep the designer-editable priority levels down to a minimum: for example, HIGH priority for highly functional behaviors, and LOW priority for cosmetic behaviors like reactions to events. In the end, I let the AI work out the relative priorities by analyzing the behavior tree, but that approach has its challenges too!
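For what it’s worth, a hypothetical sketch of that two-tier scheme (the depth-first index is my stand-in for the behavior-tree analysis, not the actual algorithm):

    // Hypothetical sketch of a two-tier priority scheme: designers only
    // pick HIGH or LOW, and relative order within a tier is derived from
    // the behavior tree (here, simply the node's depth-first index).
    enum DesignerPriority { LOW = 0, HIGH = 1 };

    struct TreeBehavior {
        DesignerPriority tier;   // designer-editable: functional vs. cosmetic
        int treeIndex;           // derived automatically by analyzing the tree
    };

    // Returns true if a should win arbitration over b.
    bool outranks(const TreeBehavior& a, const TreeBehavior& b) {
        if (a.tier != b.tier)
            return a.tier > b.tier;        // HIGH always beats LOW
        return a.treeIndex < b.treeIndex;  // earlier in the tree wins ties
    }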

What do you think about behavior priorities? Do you have any advice to share on this subject?

Discussion (10 Comments)

Ian Morrison on November 27th, 2007

In the approach I'm experimenting with now, the different behaviours all have dynamic priorities to take into account the quickly changing state of the world in my game. It's necessary since I use something similar to a smart objects approach, where the AI needs to consider many very similar behaviours at once.

The approach still has the problem of consistent conventions for priority (which becomes greater, since priorities need to hold up across wildly different actions, such as dodging versus finding cover). I partially solve this (though it's an ongoing struggle) by splitting up the priorities across multiple goals. For example, two of my goals are making the tactical situation better and avoiding dangerous situations. They tend to counterbalance each other, with actions that are both safe for the agent and dangerous to the opposition getting high values. This setup has the added advantage of being able to scale goals by personality or emotional state: if I want an AI to become more aggressive when hurt, I can just double the relevant goal when their HP drops below a certain point.

I've been fairly happy with this approach so far, since it's modular and generic enough to apply to a whole range of actions and situations. It has also made for very solid opponents who are fairly unpredictable. On the downside, there's a lot of tweaking involved in fine-tuning behaviour. It's also hard to debug, but that's partially because I haven't given myself many tools yet. Another downside is speed, since individual AIs can take up several milliseconds per frame evaluating priorities once sufficient objects are in the environment (projectiles being the worst culprit).
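A rough sketch of the kind of multi-goal scoring Ian describes, with all names and numbers invented for illustration:

    // Each goal scores an action dynamically, and per-goal weights let
    // personality or emotional state reshape the balance -- for example,
    // doubling an aggression-related goal at low health.
    #include <vector>

    struct Action;  // opaque here: dodge, find cover, attack, ...

    struct Goal {
        float weight;   // scaled by personality or emotional state
        virtual float score(const Action& a) const = 0;  // re-evaluated per frame
        virtual ~Goal() {}
    };

    float evaluate(const Action& a, const std::vector<Goal*>& goals) {
        float total = 0.0f;
        for (size_t i = 0; i < goals.size(); ++i)
            total += goals[i]->weight * goals[i]->score(a);
        return total;  // the action with the highest total wins
    }

    void onHealthChanged(Goal& aggression, float hpFraction) {
        // "double the relevant goal when their HP drops below a certain point"
        aggression.weight = (hpFraction < 0.25f) ? 2.0f : 1.0f;
    }

The per-frame re-scoring is also where the speed problem he mentions comes from: every candidate action gets evaluated against every goal, every frame.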

Andrew on November 28th, 2007

I suppose the question is geared towards combat, but it's worth saying that priorities, when applied to non-combat situations, can prove pretty good: Oblivion (although vastly cut down) follows some form of priorities, as do other games with NPCs doing non-combat "ambient" stuff. In combat, it's more difficult: it's one of many solutions from my research/experience, and best combined with other approaches to determine behavior (either on the fly or during design). Certainly, if you want a creature to retreat or run around like a headless chicken when its leader dies, it's likely easier to program that as a priority with a condition, and for those things, yeah, priorities make perfect sense. In other cases they don't: tactical squad orders/behavior would benefit from a broader process of behavior/intelligence than a single entity's current priority (if the squad leader wants to keep safe behind cover, the rest of the squad might need to flank the enemy :) ).

Dave Mark on November 29th, 2007

I would suggest looking at the reciprocal of your question. That is, what do we achieve by NOT assigning priorities to behavior decisions? The answer is, "the same bloody thing every time". That is, our behaviors are predictable and static... even if the entirety of the situation doesn't warrant a particular action, our agents will do them regardless. What we are missing, then, is the potential for dynamically shifting behaviors based on a multitude of different inputs. Really, it is similar to how we now expect dynamic pathfinding in a shifting world. We fully expect our agents to be able to process the world around them - even as it changes - and take into account the subtle variances in how they need to get where they are going. The same can be said for other agent behaviors. If we are so enthusiastic about creating a perception and weighting model for pathfinding, why do we hesitate in creating a perception and weighting model for other decisions?

Dave Mark on November 29th, 2007

As a self-serving note, my contribution to the upcoming book "AI Wisdom 4" (Alex has links to it around here someplace) covers exactly this issue. The "Multi-Axial Dynamic Threshold Fuzzy Decision Algorithm" is a class and technique for allowing you to combine two or more issues with a continually changing threshold. It makes for a powerful way of changing the balance between priorities on the fly based on changing circumstances. Ian, your last paragraph would be addressed by utilizing the MADTFDA. Go buy the book! :)

Ian Morrison on November 29th, 2007

I dunno, Dave, those are a lot of big, scary words. ;) Thanks for the recommendation, I'll check it out. It's been a while since I added anything to my game development library...

alexjc on November 30th, 2007

Interesting argument, Dave; I'm certainly curious about your article! Actually, I was thinking on a much lower level. I see priorities as a way to provide regularity and order in the arbitration of behaviors, rather than diversity and richness.

  • Without priorities, behaviors are most likely run on a first-come-first-served basis, which could be as unpredictable as the environment itself!

  • Using priorities, you get the opportunity to impose fixed rules by saying: this behavior (e.g. sprinting) will always interrupt lower-level behaviors (e.g. reloading).

Your point makes sense from the perspective of drives and motivations on a high level, as a means for selecting behaviors rather than arbitrating between them. Never thought about it in so much detail, but that's the great thing about these discussions!

Alex
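A tiny sketch of that arbitration rule (the priority values are invented): a new behavior may only interrupt the running one if its priority is strictly higher, so sprinting cuts off reloading but never the other way around:

    enum Priority { RELOAD = 1, SPRINT = 2 };  // illustrative values

    struct Agent { int runningPriority; };

    bool tryInterrupt(Agent& agent, int incomingPriority) {
        if (incomingPriority > agent.runningPriority) {
            agent.runningPriority = incomingPriority;  // take over execution
            return true;
        }
        return false;  // equal or lower: the running behavior stands
    }

So tryInterrupt(agent, SPRINT) succeeds while reloading, but tryInterrupt(agent, RELOAD) fails while sprinting, regardless of arrival order.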

Dave Mark on November 30th, 2007

In that case, I would also insist that priorities be used. Your example is a good one, but I can trump that: how about "dying" always interrupts everything else? I recently saw an analysis by a designer (not sure if it was here) where they DIDN'T allow for priority interrupts of animations. So, if an agent was working at a desk when he got shot, he simply kept working. When he was done with the working animation, he would stand up and promptly keel over and die. That is a more obvious example, but it makes the point: there are certain things that have to take precedence over others.

As for the higher-level behaviors, that is usually what I am most concerned with. I also believe that higher decision making is one of the weaker parts of game AI right now. AI that is tied to animation has been moving right along in order to keep pace with the advances in graphics/animation technology. After all, the pretty models aren't so pretty if they don't use their animations correctly. However, the high-level decisions that guide those series of animations on a macro level still lack a lot of depth. Anyway, since that is my focus, that is the way I took the idea of "behavior priorities". (And yes, this is a good discussion tool. We need a message board, however.)

alexjc on November 30th, 2007

That example is from Jeff Orkin, about the state machine in NOLF, as described in his F.E.A.R. papers. So a good follow-up question would be: is there a link between selection priorities and arbitration priorities?

Jare on November 30th, 2007

The way I tend to think of priorities is as an enforcement tool that lets me select what MUST happen, whereas weights are a coloring tool that lets me say what SHOULD happen. If I'm low on health I should go find a medikit, but I can do other things. If I'm killed I MUST die and nothing else makes sense.
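A small sketch of that MUST/SHOULD split, with hypothetical names: priority acts as a hard filter, and weights only color the choice among the survivors:

    #include <vector>
    #include <cstdlib>

    struct Option { int priority; float weight; };

    // Returns the index of the chosen option, or -1 if the list is empty.
    int choose(const std::vector<Option>& options) {
        if (options.empty()) return -1;

        // 1. Enforcement: only the highest priority tier is eligible (MUST).
        int top = options[0].priority;
        for (size_t i = 1; i < options.size(); ++i)
            if (options[i].priority > top) top = options[i].priority;

        // 2. Coloring: weighted-random pick within that tier (SHOULD).
        float sum = 0.0f;
        for (size_t i = 0; i < options.size(); ++i)
            if (options[i].priority == top) sum += options[i].weight;

        float r = sum * (std::rand() / (float)RAND_MAX);
        int chosen = -1;
        for (size_t i = 0; i < options.size(); ++i) {
            if (options[i].priority != top) continue;
            chosen = (int)i;
            r -= options[i].weight;
            if (r <= 0.0f) break;  // landed in this option's slice
        }
        return chosen;  // last survivor if rounding pushed r past zero
    }

Dying would sit alone in the top tier, so it always wins; a low-health medikit run just gets a bigger weight among the everyday options.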

Dave Mark on November 30th, 2007

Slightly off topic... we may want to decide on a definition of terms. I tend to think of the immediate decision process - especially things tied to specific animations - as "actions" and higher level decision making processes as "behaviors". I know that is what got me a little confused earlier... what do other people refer to them as?
