
Which is the Yellow Brick Path: Shortest or Best?

Dave Mark on August 6, 2008

In this week’s developer discussion on AiGameDev.com, Dave Mark picks up on a topic that’s been rather popular recently: what’s the best way to do path-finding? Let him know what you think and post a comment below.

Straight ahead seems easy enough, doesn’t it?

In the comments to last week’s column, Querying the Real Minds Behind the Artificial Minds…, Kevin Dill tossed out some things that have been on his mind lately that he would be interested in asking a gathering of top AI developers. One of his ideas was something that I have actually seen before, yet I haven’t really seen a lot of discussion on the subject. He asked:

“How can we move from finding the shortest path to finding the best path? What does ‘best’ mean? For a dog? For a soldier?”

I find it strange to think that this hasn’t been hashed out more, given that pathfinding is so completely ubiquitous in games and, despite that ubiquity, is something that keeps garnering significant space in tomes of AI knowledge and learning. That is, while pathfinding may be “solved”, it is by no means “perfected.” How does the saying go…? “Five minutes to learn — a lifetime to master?”

However, Kevin was not fishing for how to “perfect” pathfinding in the objective sense. He was asking a more nebulous question: how to “perfect” pathfinding in the subjective sense. Christer Ericson of RealtimeCollisionDetection.net recently touched on this problem in a blog post where he suggested that the extra clock cycles we burn to find the exact shortest path are perhaps a waste of time on what amounts to diminishing returns. He pointed out:

“…much too much effort is spent in games in finding the shortest paths! There is a near obsession with admissible heuristics, which is completely misguided! Who the heck cares about the shortest path?! In our everyday lives we rarely, if ever, take a shortest path. Instead, we often optimize for search effort, taking a path we’re familiar with (which we’ve chunked or otherwise memorized so as to require no search). Well, the same applies to games and the A* algorithm.”
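To make Christer’s trade-off concrete, here is a minimal weighted A* sketch in Python. Everything here is illustrative rather than production code: the graph is supplied through hypothetical neighbors/cost/heuristic callbacks, and the weight parameter is the knob that trades path quality for search effort (with an admissible heuristic, the result is at most weight times longer than optimal, but typically far fewer nodes are expanded):

    import heapq, itertools

    def weighted_astar(start, goal, neighbors, cost, heuristic, weight=1.5):
        """Weighted A*: inflating the heuristic (weight > 1) gives up the
        shortest-path guarantee in exchange for expanding fewer nodes."""
        tie = itertools.count()  # tiebreaker so the heap never compares nodes
        frontier = [(weight * heuristic(start, goal), next(tie), 0.0, start, None)]
        came_from, g_score = {}, {start: 0.0}
        while frontier:
            _, _, g, node, parent = heapq.heappop(frontier)
            if node in came_from:
                continue  # already expanded via a cheaper route
            came_from[node] = parent
            if node == goal:  # walk the parent links back to the start
                path = [node]
                while came_from[path[-1]] is not None:
                    path.append(came_from[path[-1]])
                return path[::-1]
            for nxt in neighbors(node):
                ng = g + cost(node, nxt)
                if ng < g_score.get(nxt, float("inf")):
                    g_score[nxt] = ng
                    f = ng + weight * heuristic(nxt, goal)
                    heapq.heappush(frontier, (f, next(tie), ng, nxt, node))
        return None  # goal unreachable

With weight=1.0 this is plain A*; pushing the weight up makes the search greedier, which is one way to spend fewer cycles on paths nobody will notice are slightly too long.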

Toto’s pathfinding failz… or is it just because he’s a dog?

Interestingly, I think Christer’s point solves Kevin’s question to an extent. In the search for extra clock cycles, Christer made the point that people are rarely perfectly efficient… and that’s what makes us look like… well… us. It would also be what makes a dog look like a dog, as Kevin asked for. A soldier is going to move a bit differently through an environment than is a layman. A tourist is going to move slightly differently than a businessman. Someone unfamiliar with their surroundings is going to path a bit differently than someone on mental autopilot. Someone suspicious or frightened of the unknown will move differently than someone who is confident… or for that matter, differently than someone to be suspicious of. Some of the above can be accounted for with animation, of course. However, there is a layer of subtlety that can be achieved with the pathfinding itself.

So I guess I want to co-opt Kevin’s comment from last week and ask you folks a series of questions:

  • First, is there any benefit computationally to not calculating the best path and simply settling for something that looks “good enough?”

  • Second, is there any benefit from a design standpoint to going out of our way to intentionally tweak paths for different characters — or even types of characters?

  • Third, in doing number two… what sort of features — from a design standpoint — would this “best path” entail?

  • Fourth, in doing number two… what sort of challenges — from a technical standpoint — would this approach create?

(Of course, a conversation around that may amount to a whole new Pathfinding section in the next AI Game Programming Wisdom book!)

Don’t you find it odd that we all arrived here at the same time? We must all be using the same heuristic value!

Discussion (11 Comments)

Kevin Dill on August 6th, 2008

We already typically don't find the absolute optimal path. Any time you abstract space into large chunks (for example, with a navmesh or even an old-fashioned navigation graph), and then path plan along the chunks, you sacrifice some amount of optimality. There is a tradeoff between the size of the chunks and the optimality of the path. So "good enough" is actually the norm.

Just accepting non-optimal paths isn't really what I was getting at, though. I picked the dog and the soldier as my two examples somewhat carefully, because they each have distinct problems, but they are very different problems.

Typically what we do is to compute a path, which is a series of straight lines from waypoint to waypoint. Next, we smooth the path in order to remove any unnecessary waypoints. Finally, we round off the corners somehow, either using a spline or computing the curve necessary to transition from line segment to line segment based on the character's minimum turn radius.

From the point of view of the dog character, this results in what we like to call "dog on rails." The problem isn't whether the overall path which results from this process is optimal or nearly optimal. The problem is that while a real dog would follow roughly the same route, he would almost never walk in a straight line along it. He would stop to sniff at the dirt. He would rub up against the sofa as he goes by. He would detour to make a big circle around the vacuum cleaner (which is scary to him). He would try to stay on carpet as much as possible. What's more, those are just the things that *my* dog does. A different dog would have entirely different preferences.

We can use steering forces to add a little bit of random wander; this is well known. But the dog's wander isn't random. It is PURPOSEFUL. If you watch a dog over time, you'll see his preferences and his purpose in the detours he takes. They make sense to you, and are a part of your mental model of his personality. I don't know how to solve this problem. I have some ideas, but if anybody knows of things that work (or things that don't work), I'd love to hear about it.

A soldier has a different set of problems. Where the dog runs into problems as a result of the way we do path *following* (i.e. the way we smooth the path and round the turns), the soldier runs into problems with the path planning itself. Military guys generally consider three major factors when planning a route: trafficability (what units can move there?), visibility (where can I see from the route, and where can I be seen from?), and fields of fire (where can I shoot from the route, and where can I be shot from?). Game AIs consider only one factor: distance. You'll notice that the sets of factors considered are disjoint. That strikes me as a problem.

Military guys can also do things like attack from more than one direction. In fact, pretty much every strategy I learned in the infantry ended with the enemy in a cross fire one way or another. This might mean that a large force moved up along a single route and then split to outflank the enemy, or it might mean that the force split early and then maneuvered until the two forces were in flanking positions. Military guys will also often look for roughly parallel paths for different units to move along, so that you don't end up with all of your eggs in one basket. If one force is ambushed, the others can either run like hell, or come to their aid, as appropriate.
We can sort of kind of get the soldier to work, in a mediocre way, by adjusting the path costs, so that paths that are likely to come under observation, or which have poor trafficability, are more expensive than paths which are clear of these issues. There are a couple of problems with that. Most significantly, if you're observed at even one spot along the path, then the damage is done - so it only takes one short segment where you're likely to be spotted to make a path bad (but not untenable if nothing better exists). Similar issues exist with trafficability and fields of fire. The end result is that this technique becomes extremely difficult to balance, and the results that you get out of it are often fairly mediocre.

Another problem with adjusting your costs is that it greatly increases the cost of pathfinding. This makes intuitive sense: it increases the number of possibilities you need to explore to look for a better path. It also makes mathematical sense: if your heuristic assumes movement along a perfectly optimal path, then it will be significantly more optimistic (and thus significantly less useful) than a heuristic which is closer to accurate.

A better solution seems to me to be to use something like Dijkstra's algorithm to find *all* paths within a certain *length* (however long you're willing to allow travel to take), and then score each path using an algorithm that's more complex than simply putting a cost on each segment traveled. This would allow you to not only find the "best" path, but also to find the N best paths, and send some units along each of them.
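Sketching that last suggestion (in Python; I've used a bounded depth-first enumeration for clarity rather than a true Dijkstra sweep, and the visibility/trafficability callbacks are hypothetical per-node annotations):

    def paths_within_budget(start, goal, neighbors, cost, budget):
        """Enumerate all simple paths from start to goal whose length is
        under the budget (exponential in the worst case, so keep the
        budget and the graph small)."""
        results, stack = [], [(start, [start], 0.0)]
        while stack:
            node, path, length = stack.pop()
            if node == goal:
                results.append((path, length))
                continue
            for nxt in neighbors(node):
                step = cost(node, nxt)
                if nxt not in path and length + step <= budget:
                    stack.append((nxt, path + [nxt], length + step))
        return results

    def score_path(path, length, visibility, trafficability):
        """Richer scoring than summed edge costs. Note the max() on
        visibility: one exposed segment spoils the whole path, so
        exposure shouldn't simply be summed along the route."""
        exposure = max(visibility(n) for n in path)      # worst single spot
        terrain = sum(trafficability(n) for n in path)   # cumulative slog
        return length + 10.0 * exposure + terrain

    # Pick the N best-scoring routes and send a unit along each:
    # ranked = sorted(paths_within_budget(s, g, nbrs, cost, budget),
    #                 key=lambda pl: score_path(pl[0], pl[1], vis, traf))[:N]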

Kevin Dill on August 6th, 2008

I should have said... obviously, the biggest problem with all of the above is that it's significantly more expensive than simply finding the shortest path - and we can oftentimes barely get enough cycles for that. Which ties in nicely to the discussion related to what's up Intel's sleeves...

JonBWalsh on August 6th, 2008

For the dog example it sounds like that's a problem related less to the actual pathfinding than it would be to the agent's objective or goal. As a dog approaches objects of interest (vacuum cleaner, something to sniff at, etc.) you would need to create a higher priority goal for the agent to respond to the object.

[quote]First, is there any benefit computationally to not calculating the best path and simply settling for something that looks “good enough?”[/quote]

The benefit I see is control. "Good enough" is rather vague and seems like it'd be something a little harder to predict. I'd be worried about giving my AI this kind of pathing. Most of the time good enough would be, well... good enough, but something that could look "good enough" to a computer could look completely ridiculous to a player. I still think it'd be worthwhile to implement pathfinding that creates believable paths without using as much CPU time (using sub-optimal routes), but there are some benefits to knowing your path is optimal.
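The goal-override idea could look something like this minimal sketch (Python; the interest list, the priorities, and the notice radius are all hypothetical):

    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def select_goal(travel_goal, dog_pos, interest_objects, notice_radius=3.0):
        """Run each tick: a nearby object of interest spawns a
        higher-priority goal that temporarily overrides travel.
        interest_objects is a list of (position, priority, kind) tuples."""
        nearby = [(prio, pos, kind) for pos, prio, kind in interest_objects
                  if dist(dog_pos, pos) <= notice_radius]
        if nearby:
            prio, pos, kind = max(nearby)  # highest-priority nearby object wins
            return ("investigate", kind, pos)
        return ("travel", None, travel_goal)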

hellokeith on August 6th, 2008

WRT path distributions… The addition of waypoints is a nice way to distribute path selections. The waypoints of course must be on navmesh polys. So the NPC now has the options:

1) A* from [origin Y] to [destination Z]
2) A* from [origin Y] to [waypoint S] and then A* from [waypoint S] to [destination Z]
3) Connect a series of waypoints and A* between them, with [destination Z] as the final goal

If the vacuum cleaner were a static object, then just a few connected waypoints could be used to get the dog to circle it. I know this isn't glamorous, but it could still be convincing.

A more dynamic method could be a threshold, so if paths A, B, & C are all within 85% distance or cost to the destination, then they are all valid and the path can be selected at random or based on further tactical/strategic criteria.

As far as humanizing pathfinding, I don't know about the rest of you guys, but I generally choose between the paths in front of me, even if turning around would be more efficient distance-wise. :)
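Both ideas are easy to prototype. A sketch (Python; astar and path_cost are assumed to be supplied by the game, and the 1.15 slack factor is just one reading of the 85% threshold):

    import random

    def path_via_waypoints(astar, origin, destination, waypoints):
        """Chain A* searches through a series of waypoints (option 3
        above). astar(a, b) is assumed to return a list of nodes from
        a to b, or None if no path exists."""
        legs, current = [], origin
        for target in list(waypoints) + [destination]:
            leg = astar(current, target)
            if leg is None:
                return None
            legs.extend(leg if not legs else leg[1:])  # don't duplicate joints
            current = target
        return legs

    def pick_path(candidates, path_cost, slack=1.15):
        """Treat every candidate within `slack` of the cheapest as
        equally valid, then choose among them at random (or plug in
        tactical/strategic criteria here instead)."""
        best = min(path_cost(p) for p in candidates)
        valid = [p for p in candidates if path_cost(p) <= slack * best]
        return random.choice(valid)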

PaulTozour on August 6th, 2008

[LIST]
[*][I]First, is there any benefit computationally to not calculating the best path and simply settling for something that looks “good enough?”[/I]
[/LIST]

Sometimes, yes, but with the direction the hardware is heading, that kind of minor performance issue is going to be irrelevant soon. In any event, there are usually better optimizations.

[LIST]
[*][I]Second, is there any benefit from a design standpoint to going out of our way to intentionally tweak paths for different characters — or even types of characters?[/I]
[/LIST]

Yes, absolutely. The knight on horseback who moves in long straight-line charges, the drunken guard who staggers around semi-randomly while walking, the clever flying robot who dodges left and right as he flies, the soldier on a motorcycle who often has to take wide turns, or do 270-degree loops in the other direction to make a tight turn... All of that adds lots of personality to the characters.

[LIST]
[*][I]Third, in doing number two… what sort of features — from a design standpoint — would this “best path” entail?[/I]
[/LIST]

That depends on the design :) Sometimes it's as simple as adding random or sinusoidal "wobble" to the best path (as with the drunken guard and the robot in my example above). Other times, you really need to add something to your core algorithm, but there are more design possibilities here than anyone can describe in advance. In the case of, say, an elongated 18-wheeler truck that occasionally has to make a 5-point turn in the city streets, it requires a search of a much larger configuration space. Thankfully, most game designers don't put 18-wheelers in their games.

[LIST]
[*][I]Fourth, in doing number two… what sort of challenges — from a technical standpoint — would this approach create?[/I]
[/LIST]

It makes path planning a lot harder. The motorcycle that has to do a 270-degree loop to the left to make a 90-degree right turn, for example, requires extensive tweaks to the search algorithm and ends with a path that's much longer than the shortest path for a human soldier.

In the long run (10-20 year time frame), I like to think about pathfinding in terms of an overall planning architecture. Imagine a game where AIs can not only [I]find[/I] paths, but [I]create[/I] them. Imagine a bombed-out building in a war zone, and AIs can lay planks across openings to walk over them, stack boxes to get up onto ledges, tie one end of a rope around a brick and use it as a grappling hook and shimmy across the rope, destroy parts of walls to create new paths... Having AIs play MacGyver like that isn't necessarily fun in and of itself, since it doesn't directly involve the player as I described it... but it could lead to some new gameplay possibilities.
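The sinusoidal "wobble" case is simple enough to sketch (Python; assumes the path is a densely and roughly evenly sampled list of 2D points, with amplitude and wavelength as purely illustrative tuning knobs):

    import math

    def add_wobble(path, amplitude=0.5, wavelength=4.0):
        """Displace each point of a dense 2D path sideways by a sine
        wave (the drunken guard or the dodging robot above)."""
        out, travelled = [path[0]], 0.0
        for (x0, y0), (x1, y1) in zip(path, path[1:]):
            seg = math.hypot(x1 - x0, y1 - y0)
            travelled += seg
            if seg == 0.0:
                continue
            # unit normal (perpendicular) to the direction of travel
            nx, ny = -(y1 - y0) / seg, (x1 - x0) / seg
            offset = amplitude * math.sin(2 * math.pi * travelled / wavelength)
            out.append((x1 + nx * offset, y1 + ny * offset))
        return out

Swap the sine for a random walk and you get the drunken guard instead of the robot; either way the underlying shortest path is untouched, only the following of it changes.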

diegix on August 7th, 2008

[QUOTE=PaulTozour;4076]
[LIST]
[*][I]Second, is there any benefit from a design standpoint to going out of our way to intentionally tweak paths for different characters — or even types of characters?[/I]
[/LIST]
Yes, absolutely. The knight on horseback who moves in long straight-line charges, the drunken guard who staggers around semi-randomly while walking, the clever flying robot who dodges left and right as he flies, the soldier on a motorcycle who often has to take wide turns, or do 270-degree loops in the other direction to make a tight turn... All of that adds lots of personality to the characters.[/QUOTE]

Sometimes it's not a "nice to have" feature, but a feature required by design. Having characters that can climb walls, fly, go underground, walk, etc. in the same game requires a different pathfinding solution for almost each type. However, it's mostly a case-specific thing. It would be great to have a parametrized pathfinding that you can configure to be more cautious, more scared, etc., and it will adjust the resulting paths accordingly. We can probably do most of that stuff by adjusting costs with more or less complex heuristics, but it requires a lot of work and tuning.

For the example of the dog I would probably go with finding a regular path to his final destination and then, while the dog is going there, he will be examining his surroundings for interesting things and adjust the path to approach them, if it's something just to pass closer or farther away, or just add another behavior on top of the current one to examine some close smelly shoes (after all, you probably want to put some specific animation, sound, etc. there). All of this taking into account distance to the path to his final destination, so he won't be distracted by something too far away.

[QUOTE=PaulTozour;4076]Imagine a game where AIs can not only [I]find[/I] paths, but [I]create[/I] them. Imagine a bombed-out building in a war zone, and AIs can lay planks across openings to walk over them, stack boxes to get up onto ledges, tie one end of a rope around a brick and use it as a grappling hook and shimmy across the rope, destroy parts of walls to create new paths...[/QUOTE]

You probably can do pretty much everything there annotating your world with specific pathfinding requirements. A simple example would be the requirement of opening the door to reach the other room, which can be extended by having to find a key to open it or a bomb to blow it up...
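That parametrized, personality-driven cost adjustment might be sketched like this (Python; the danger/openness annotations and the caution/fear weights are hypothetical):

    def make_cost(base_cost, danger, openness, caution=0.0, fear=0.0):
        """Build an edge-cost function parameterized by personality.
        danger and openness are per-node annotations in [0, 1]."""
        def cost(a, b):
            c = base_cost(a, b)
            c += caution * danger(b) * c    # cautious agents avoid risky nodes
            c += fear * openness(b) * c     # scared agents hug cover
            return c
        return cost

    # A guard and a frightened civilian path the same world differently:
    # guard_cost    = make_cost(base, danger, openness, caution=0.2)
    # civilian_cost = make_cost(base, danger, openness, caution=1.0, fear=2.0)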

PaulTozour on August 7th, 2008

[QUOTE=diegix;4079]You probably can do pretty much everything there annotating your world with specific pathfinding requirements. A simple example would be the requirement of opening the door to reach the other room, which can be extended by having to find a key to open it or a bomb to blow it up...[/QUOTE]

Yes, you're right that you can do that with most games, but for that example I'm thinking more in terms of a completely dynamic world. I'm thinking more blue-sky, distant-future type stuff: physics objects everywhere, almost everything is destructible. At that point, when the entire game world can be dynamically rearranged, destroyed, rebuilt, and so on, annotations in the game world itself are almost useless. You can only safely annotate at the level of the most atomic objects, those that can't be subdivided or mutilated further -- say, the individual bricks in a brick wall, assuming those bricks can't be destroyed.

Dave Mark on August 7th, 2008

[quote=PaulTozour;4103]At that point, when the entire game world can be dynamically rearranged, destroyed, rebuilt, and so on, annotations in the game world itself are almost useless. You can only safely annotate at the level of the most atomic objects, those that can't be subdivided or mutilated further -- say, the individual bricks in a brick wall, assuming those bricks can't be destroyed.[/quote]

Which makes it fun to automatically generate a mesh for the resulting [I]pile[/I] of bricks that would be created - and certainly climbable. How granular do you need to be? Do you need a handful of points for the pile or one for each brick?

All in all, it's just a... 'nother brick in the wall. (Please tell me you saw [I]that[/I] coming, Paul.)

Dave Mark on August 7th, 2008

[quote=Kevin Dill;4057]From the point of view of the dog character, this results in what we like to call "dog on rails." The problem isn't whether the overall path which results from this process is optimal or nearly optimal. The problem is that while a real dog would follow roughly the same route, he would almost never walk in a straight line along it. He would stop to sniff at the dirt. He would rub up against the sofa as he goes by. He would detour to make a big circle around the vacuum cleaner (which is scary to him). He would try to stay on carpet as much as possible. What's more, those are just the things that *my* dog does. A different dog would have entirely different preferences. We can use steering forces to add a little bit of random wander; this is well known. But the dog's wander isn't random. It is PURPOSEFUL. If you watch a dog over time, you'll see his preferences and his purpose in the detours he takes. They make sense to you, and are a part of your mental model of his personality. I don't know how to solve this problem. I have some ideas, but if anybody knows of things that work (or things that don't work), I'd love to hear about it.[/quote]

Instinctively, I want to do something along the lines of another use for A* - and that is planning algos. In this case, what maps over is the concept of "continual re-plan". As the dog heads out for his ultimate destination (i.e. not "death", just whatever he has in mind), he is going to get distracted along the way by the things you mentioned. Most of them don't take him too far out of his way, but they are convenient curiosities along the route. He doesn't plan (so to speak) for them when he originally sets out; he has to splice them in as he goes. In order to accomplish this, we would need something of an HTN planner involved.

This could be extended in a lot of ways. We already re-plan paths when stuff gets in the way that invalidates an originally legit path. However, can we use it (for humans) for other things? (Not sniffing "the dirt" or watering things that don't need to be watered, however... that would be odd.) For example, while walking down a hall, perhaps a slight angle toward and hesitation at a window or door to glance in as you pass by? Or a slight angle away from a busy entry to avoid a blind collision with someone coming out? [I]Those[/I] seem to be the subtleties that make for the more believable, "non-rails" paths.
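A crude sketch of the splice-as-you-go idea (Python; collapsed into a single up-front pass for brevity rather than a true continual re-plan, with curiosities as a hypothetical list of 2D points):

    import math

    def splice_detours(path, curiosities, max_detour=2.0):
        """Walk the planned path and splice in any curiosity that lies
        within max_detour of a waypoint: detour to it, then rejoin the
        route. A real implementation would re-plan as the agent moves."""
        out, remaining = [], list(curiosities)
        for point in path:
            out.append(point)
            for c in list(remaining):
                if math.hypot(c[0] - point[0], c[1] - point[1]) <= max_detour:
                    out.append(c)      # veer off to the curiosity...
                    out.append(point)  # ...then come back to the route
                    remaining.remove(c)
        return out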

PaulTozour on August 8th, 2008

[QUOTE=Dave Mark;4107]Which makes it fun to automatically generate a mesh for the resulting [I]pile[/I] of bricks that would be created - and certainly climbable.[/QUOTE]

Exactly... much like the Kynapse demos (which, as far as I can tell, is not a technology that's actually working in their middleware yet). The more interesting problem is to think about how an AI would rearrange bricks (and other oddly-shaped debris) in order to create the nav mesh it wants.

[QUOTE=Dave Mark;4107]All in all, it's just a... 'nother brick in the wall. (Please tell me you saw [I]that[/I] coming, Paul.)[/QUOTE]

Yes... Yes, sadly, I did... :(

smanier on August 8th, 2008

I don't have the time to read all your posts, but it seems to me nobody talked about multiobjective A*. To explain this idea, here is the abstract of the paper "Iterative deepening multiobjective A*":

[I]Many real-world optimization problems involve multiple objectives which are often conflicting. When conventional heuristic search algorithms such as A* and IDA* are used for solving such problems, then these problems have to be modeled as simple cost minimization or maximization problems. The task of modeling such problems using a single valued criterion has often proved difficult [6]. The problems involved in accurately and confidently determining a scalar valued criterion on which to base the selection of a most preferred alternative have led to the development of the multiobjective approach to alternative selection [7]. In [7], Stewart and White have presented a multiobjective generalization of the popular A* algorithm, the MOA*, which uses heuristic based best first search techniques to generate all non-dominated solutions. Like A*, MOA* also has exponential space complexity. Depth first search techniques use linear space, but take too much time and do not lead to an admissible algorithm.[/I]
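The heart of the multiobjective idea is the dominance test. A minimal sketch (Python; this shows only the Pareto filtering that such searches maintain, not the full MOA* algorithm, and the (cost_vector, path) pairs are hypothetical):

    def dominates(a, b):
        """Cost vector a dominates b if it is no worse on every
        objective and strictly better on at least one."""
        return all(x <= y for x, y in zip(a, b)) and \
               any(x < y for x, y in zip(a, b))

    def nondominated(solutions):
        """Filter (cost_vector, path) pairs down to the Pareto front,
        i.e. the non-dominated solutions MOA* is designed to return."""
        front = []
        for cost, path in solutions:
            if any(dominates(c, cost) for c, _ in front):
                continue  # something already on the front beats it
            front = [(c, p) for c, p in front if not dominates(cost, c)]
            front.append((cost, path))
        return front

    # With objectives (distance, exposure):
    # nondominated([((10, 0.9), pathA), ((14, 0.1), pathB), ((15, 0.8), pathC)])
    # keeps pathA and pathB; pathC is dominated by pathB on both objectives.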
