The most common, and arguably most important, trick for optimizing search algorithms is reducing the size of the graph. If you can search a smaller graph, why not do that? For pathfinding, this has led to many techniques for better triangulating navigation meshes, reducing the number of nodes without compromising path quality or validity.
In this article, we'll dig into a new feature in the most recent version of PathEngine that optimizes the handling of highly dynamic obstacles: obstacles that move around often and would be expensive to update each frame. How can this be done efficiently? How does it impact the size of the graph? What's the effect on the expanded search space?
The Cost Of Dynamic Obstacles
In complex dynamic environments, (large) obstacles that must be considered in the navigation mesh (not just avoided locally) can be expensive to deal with. Often, you'll need additional computation to insert these obstacles into the search graph using CSG (Constructive Solid Geometry), which not only takes time but also makes the graph significantly larger.
Thomas Young, founder and pathfinding guru, explains how PathEngine handles this:
“PathEngine's approach to pathfinding obstacles is an important SDK feature, and something I think we do well.
This is something that's difficult to get right and a bit of a bugbear for a lot of pathfinding systems, in particular where different representations are used for local and global navigation. Discrepancies can lead to some nasty fail cases for higher level behaviour code, often without there being any easy way for this to be resolved in a satisfactory way at that higher level. (So it's really worth taking some time to engineer this into the system up front!)
PathEngine's obstacles are 2D (projected) shapes, placed on an underlying pathfinding ground mesh. We can either integrate shapes into the actual pathfinding space boundaries (in a kind of CSG combination operation), or they can be applied to individual pathfinding requests as 'very dynamic' obstacles, as a set of corners + collision checks against the placed obstacle. The delayed attach optimisations apply specifically to this second scenario, for very dynamic obstacles.”
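To make the 'very dynamic' representation concrete, here is a minimal sketch in Python. This is illustrative only, not PathEngine's API or data layout (the `DynamicObstacle` class and its method names are invented for this article): an obstacle is held as a set of projected 2D corners plus a segment collision check, with nothing merged into the mesh.

```python
def _cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])


class DynamicObstacle:
    """A 'very dynamic' obstacle as a 2D projected convex shape:
    just a set of corners plus a collision check, never integrated
    into the pathfinding mesh itself."""

    def __init__(self, corners):
        self.corners = list(corners)  # counter-clockwise (x, y) tuples

    def contains(self, p):
        """Is p strictly inside the convex polygon?"""
        n = len(self.corners)
        return all(_cross(self.corners[i], self.corners[(i + 1) % n], p) > 0
                   for i in range(n))

    def blocks(self, a, b):
        """Collision check: does segment a->b pass through the obstacle?"""
        n = len(self.corners)
        for i in range(n):
            q1, q2 = self.corners[i], self.corners[(i + 1) % n]
            # strict crossing test: both endpoint pairs on opposite sides
            if (_cross(q1, q2, a) * _cross(q1, q2, b) < 0 and
                    _cross(a, b, q1) * _cross(a, b, q2) < 0):
                return True
        # also catch segments lying entirely inside the shape
        return self.contains(((a[0] + b[0]) / 2, (a[1] + b[1]) / 2))
```

Note that segments which only graze a corner or run along an edge are deliberately not treated as blocked, which matters later: connections between an obstacle's own corners stay usable.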
Regardless of what technique you use, integrating dynamic obstacles into a mesh is never free. The best policy, then, may be to not integrate them into the mesh at all!
Lazy Attached Obstacles
The idea of attaching obstacles lazily is to avoid doing computation if it's not required, since the engine is expecting that obstacle to move in the next frame anyway. In the screenshot above, you can see that only the obstacle in the doorway is expanded while the ones in the middle of the room don't affect the path and are left alone.
Thomas Young explains how this is implemented under the hood:
“When applied as 'very dynamic' obstacles, obstacle shapes are only connected to the graph if and when this turns out to actually be necessary for that specific set of path search criteria. Specifically, obstacles are only added if and when they obstruct a path connection that has been fully validated and is about to be closed as an explored node.”
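The mechanism described above can be sketched as a toy points-of-visibility A* (a hypothetical illustration, not PathEngine's implementation; `lazy_astar` and its helpers are invented names): connections are validated against already-attached obstacles during expansion, and an unattached obstacle is only pulled into the graph when it blocks a connection that is about to be closed.

```python
import heapq
import itertools
import math


def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])


def _blocked(a, b, poly):
    """Does segment a->b pass through convex CCW polygon `poly`?"""
    n = len(poly)
    for i in range(n):
        q1, q2 = poly[i], poly[(i + 1) % n]
        if (_cross(q1, q2, a) * _cross(q1, q2, b) < 0 and
                _cross(a, b, q1) * _cross(a, b, q2) < 0):
            return True
    mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    return all(_cross(poly[i], poly[(i + 1) % n], mid) > 0 for i in range(n))


def lazy_astar(start, goal, obstacles):
    """A* over a points-of-visibility graph in which each obstacle's
    corners are only attached if it blocks a validated connection."""
    attached = set()         # indices of obstacles attached so far
    nodes = [start, goal]    # obstacle corners join this list on attach
    g = {start: 0.0}
    parent = {start: None}
    closed = set()
    open_q = []
    tick = itertools.count()  # tie-breaker so heapq never compares nodes

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def expand(n):
        # connections are checked against *attached* obstacles only
        for m in nodes:
            if m in closed or m == n:
                continue
            if any(_blocked(n, m, obstacles[i]) for i in attached):
                continue
            ng = g[n] + dist(n, m)
            if ng < g.get(m, math.inf):
                g[m], parent[m] = ng, n
                heapq.heappush(open_q, (ng + dist(m, goal), ng,
                                        next(tick), m, n))

    closed.add(start)
    expand(start)
    while open_q:
        _, gc, _, n, par = heapq.heappop(open_q)
        if n in closed or gc != g.get(n, math.inf):
            continue
        # the connection par->n is about to be closed: only now is it
        # validated against the obstacles that are not yet attached
        hit = next((i for i, poly in enumerate(obstacles)
                    if i not in attached and _blocked(par, n, poly)), None)
        if hit is not None:
            attached.add(hit)             # lazily attach this obstacle
            nodes.extend(obstacles[hit])  # its corners join the graph
            del g[n]                      # discard the blocked connection
            for c in list(closed):        # re-offer connections to the
                expand(c)                 # newly added corners
            continue
        closed.add(n)
        if n == goal:
            path = [n]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1], attached
        expand(n)
    return None, attached
```

Obstacles that never obstruct a validated connection are never attached at all, which is exactly the saving: their corners never enter the graph and no connections around them are ever computed.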
Examples: Before and After
In these screenshots, you can drag your mouse to compare the extent of the pathfinding search with and without delayed obstacle attachment. The results are always better, in some cases drastically so!
- The 'delayed attached dynamic obstacles' optimisation is something that is useful in essentially all searches where dynamic obstacles have been placed, with the speedups getting more significant as more obstacles are added (and for situations where dynamic obstacles are 'scattered' over a pathfinding environment).
- The 'delayed attach for static graph' optimisation gives us a speedup only in certain particular situations, but with the speedups then tending to be very significant for those specific situations.
Q: Are the results always the same with the lazy insertion?
TY: There's no path quality trade-off from these optimisations (unlike the 'small convex obstacle' and similar optimisations, discussed a bit here). The delayed attach optimisations are essentially transparent to external code, and don't affect path length guarantees. (So, assuming the small convex optimisation is not turned on, you are still guaranteed the shortest path with these optimisations.)
Note that turning this optimisation on or off _can_ change the path result for certain specific situations, however, where there is more than one path from start to goal with the same globally shortest path length. This is for the same reason that changing certain details of the internal search implementation such as the priority queue structure or the ordering of graph connections can change the path result in the same situations.
Q: Does it take extra processing to make this happen?

TY: There is a bit of extra book-keeping to keep track of which graph components have been attached, and some extra details relating to underlying collision checking methods, but the cost for this stuff is really pretty minimal.
Q: How do cases where obstacles block doorways, forcing the A* search to backtrack, fare in terms of performance?

TY: I haven't measured this case specifically, but I'd expect this to be fairly similar in performance to the same situation _without a doorway_ and with a dynamic obstacle placed nearby, since most of the cost for this kind of situation is likely to relate just to the fact that backtracking is required, and with the need then to explore connections in other directions.
Q: Can this approach ever be worse than inserting obstacles up front, and if so, in which cases?

TY: Since there is some book-keeping cost, and because it is cheaper to do things in a kind of batch operation than individually (for processor caches and so on), cases where all or nearly all of the graph components present eventually get attached would be slightly faster without the delayed attach optimisations.

For situations where there are a bunch of dynamic obstacles placed (which is what we are targeting with these opts) I think that this is unlikely to happen very often, however. And note that it is quite common for dynamic obstacles to remain unattached even in cases where we force the whole search space to be expanded!
Q: Why does the points of visibility approach make implementing this feature easier?
TY: I guess because of the way that 'completely dynamic' obstacles are implemented for points of visibility pathfinding, i.e. we just add some additional points of visibility, plus some collision constraints that apply to other connections. Note that there is then no rejigging or rerouting (or retriangulation or whatever) required for other connections around the obstacle. So the existing, potentially already partially explored, graph remains valid.
Having said that, the idea of doing this kind of delayed attach for graph search is really quite a general idea, and it should definitely be interesting to think about ways in which this could potentially be applied to other kinds of pathfinding graph search...
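As a rough illustration of why the points-of-visibility representation makes delayed attach natural (again a sketch, not PathEngine's API; `PovGraph` and its crude collision stand-in are invented here): applying an obstacle only appends points and a constraint, so connections that were already explored remain structurally untouched and merely need rechecking.

```python
class PovGraph:
    """A points-of-visibility graph where a 'completely dynamic'
    obstacle is applied by appending extra visibility points plus a
    collision constraint; nothing existing is rerouted or rebuilt."""

    def __init__(self, points):
        self.points = list(points)  # visibility points (x, y)
        self.obstacles = []         # obstacle polygons, as constraints

    def add_obstacle(self, corners):
        self.points.extend(corners)     # some extra points of visibility
        self.obstacles.append(corners)  # plus one collision constraint

    def connection_valid(self, a, b):
        # existing connections never need rebuilding, only rechecking
        # against the constraint list
        return not any(self._blocks(a, b, poly) for poly in self.obstacles)

    @staticmethod
    def _blocks(a, b, poly):
        # crude stand-in for a real segment-vs-shape collision test:
        # flag the connection if its midpoint falls inside the polygon's
        # bounding box (good enough to illustrate the structure)
        mx, my = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
        xs = [p[0] for p in poly]
        ys = [p[1] for p in poly]
        return min(xs) < mx < max(xs) and min(ys) < my < max(ys)
```

Contrast this with a navmesh representation, where inserting the same obstacle would retriangulate the surrounding region and invalidate any search state that referenced the old triangles.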
Screenshot: Finding a path between buildings with highly dynamic obstacles in between, as shown in the before/after comparison above, but with only the final path drawn in green. How much more searching (or melding of obstacles into the navmesh) does your solution take to support this use case?
Many thanks to Thomas Young for taking the time to answer these technical questions and sharing the underlying implementation details of PathEngine! See PathEngine.com