Alex: This is the second part of the talk that Remco started. My part is split into two subsections. The first is more about the software architecture, what's necessary to basically get everything to work, and the second part will be about adding lots of intelligence and making sure the bots behave in a strategic way.
The first part is the software side, and the big challenge is the one that Remco mentioned earlier: there's a huge variety of different configurations. The number of bots possible in the game varies. There can be 1 bot: if all the other bots have been kicked out by human players in an online game, there could be one left over. Two bots on each side is the default configuration when you start an online game. When you're playing offline, there are 8 bots by default. One of the use cases we had in mind was two experienced players against 14 bots. The whole system needs to support that variety of behaviors, from one bot up to 14 bots. You have to deal with that strategically and figure out how best to organize these bots. Obviously that means we have to figure out the best number of squads, how to assign these squads, and how to put bots into squads.
Having a fixed policy just won't work. As Remco mentioned, initially we started off with just the offline bots that were there to teach the player how to play the game, and with that kind of system it's easy to say: OK, we're going to start off with these 8 bots, we know which ones they are, we're going to fix them into 3 squads, and we know exactly which squads are assigned to what. But when you throw online play into the mix, and the bots can join and leave at any point in time, it just doesn't work. You can't have a fixed policy.
Comparison with HALO 3
About the time we were solving this problem was when Damian Isla did his talk at GDC 2008. Damian Isla was the AI lead on HALO 3, and he worked on HALO 2 as well. What Damian talked about is how AI objectives are used in HALO 3 to essentially assign single-player enemies to different battle positions. But what Damian's really good at is turning what could be a boring software engineering problem into a fascinating NP-hard AI challenge. That gave us all hope of turning what could've been quite a boring task into something much more interesting. *laughter*
In practice, there were many differences between what Damian was using on HALO 3 and what we were trying to do with the multiplayer bots in KILLZONE 2. One of the big differences was that in HALO 3 they're assigning disposable NPCs, which only live once, whereas in KILLZONE and KILLZONE 2 the bots respawn when they die, so essentially that gives us fully persistent online bots. In HALO 3 they use this kind of high-level objective management in a level-specific way, whereas we had to write the piece of code once so it would apply to all the levels. In HALO 3 they use it as part of a design tool, whereas we were more interested in making it possible to support the wide variety of configurations that we needed. And HALO 3 was a very data-driven approach, whereas we were mostly based in code; as we said, the designers specified the behaviors but they were actually implemented by the AI team, so it was mostly code with a few declarative interfaces. And on HALO 3 the system was mostly story driven, deciding how to move the NPCs around according to the current story, whereas in KILLZONE 2 the objectives are more strategic.
So in practice there were many differences between what we were trying to do and what Damian's system achieved, but the main idea that we kept was this concept of separating what to do from how to do it. It's a basic decoupling, and that's a very common pattern in game AI. It's the kind of thing Remco talked about earlier: the bots and the squads both have this concept of separating the goal from how you accomplish the goal, as a plan. We used the same approach for the strategy to make things simpler to manage, and to allow us to support the wide variety of bots that we needed in the game.
Remco showed you the strategy; it was the little box in the first part, and I'm going to blow that up, look inside, and show you how it's structured. Essentially at the top here we have mission-specific AI. This is written in C++; these are classes, and they are relatively simple. They gather information from the game state, process that information, and assign objectives based on it. For example, "Search & Retrieve" grabs from the game state where the current object is that you need to retrieve, and it will send the bots to that location. The same goes for "Search & Destroy" and what objects need to be destroyed. So these turn out to be relatively simple classes. Two of the most interesting ones are "Search & Retrieve" and "Capture & Hold". "Capture & Hold" is one that Tim worked on a lot, and "Search & Retrieve" is one of my favorites.
These mission-specific pieces of AI code essentially run on a regular basis, and their job is to create and manage the objectives, which I'll talk about in more detail. They essentially act as the interface between the mission-specific stuff and the common strategy that remains persistent. The strategy is the part of the code responsible for making sure that the objectives, which describe what to do, actually get expanded into tasks or goals for the squads and the bots. The strategy has the option of creating squads, deleting squads, reusing previous squads, assigning bots to squads, and reassigning bots; it has full control of anything in this subsystem here, so the strategy does whatever it takes to make things work.
A big part of the interface between the high level and the low level is the AI objectives. These are designed based on what the squads can provide. The behaviors were built into the squads and were already there, so designing the objectives to match what the squads could do made sense. But we also looked at it from a high-level perspective: what do the missions need in order to be accomplished? We ended up with four main objectives, combining offensive and defensive ones. You can advance to any point in space, or you can defend, not any point in space, but key locations in space; these are markers which I'll show you later. Those are static points in space. We also found it necessary to add dynamic objectives. This was necessary to avoid having to repeatedly say "Hey, go and advance to this waypoint": if you have a moving target, you don't want the high-level strategy micro-managing the low levels. By having the dynamic AttackEntity and EscortEntity as orders, you avoid a regular interchange between those two layers, so it gives you a nice granularity at which to interact with the squads.
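As a rough illustration (none of this is Guerrilla's actual code; the class and field names are my assumptions), the four objective kinds described above might be modeled like this, with static objectives targeting a point or marker and dynamic ones targeting an entity:

```python
from dataclasses import dataclass

@dataclass
class Objective:
    """One strategic objective; names and fields are illustrative assumptions."""
    kind: str            # "Advance", "Defend", "AttackEntity", or "EscortEntity"
    target: object       # a position for static kinds, an entity id for dynamic ones
    weight: float = 1.0  # relative share of bots this objective should attract

DYNAMIC_KINDS = {"AttackEntity", "EscortEntity"}

def is_dynamic(obj: Objective) -> bool:
    # Dynamic objectives follow an entity, so the squad tracks the target itself
    # instead of the strategy repeatedly re-issuing "advance to X" orders.
    return obj.kind in DYNAMIC_KINDS
```

The point of the split is exactly the granularity mentioned above: the strategy hands over one dynamic objective and the lower layers keep following the entity on their own.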
I'll give you a quick example with "Search & Retrieve." This is a top-down view of the map, with some parts of the skybox left in. In the middle here there will be an object, some kind of propaganda speaker, which you essentially have to capture and bring back to the base. The strategy will set an objective here to capture that strategic object and return it to the base. At the start of the level there will be an objective to advance to that location; if someone picks it up, whether a friend or an enemy, it'll become either an escort or an attack objective. The strategy can change what the objective is context-sensitively. Each of the factions defends its own base, so there's always an objective to defend your current base. I drew these circles different sizes because the advance objective has essentially double the weight of the defend objective, so more bots are assigned to that particular assignment. I'll talk about that a bit more later.
Sometimes, when the team is winning or has possession of the "Search & Retrieve" object, it will advance to the enemy base and send a couple of bots ahead. This whole thing makes the "Search & Retrieve" game very dynamic, and it makes it much more interesting, because the bots are pretty good at defending, so just getting them all to defend would be a safe bet, but getting them to attack a bit more makes the whole game more dynamic and fun to play.
Now, an important part of scaling up and down to support 1 bot up to 14 bots is having objectives that are interesting and rich enough to support many bots. If you assign all the bots to the same very simple objective, they might just pile up outside that particular area, and you don't want 14 bots standing in the same spot. So we had to provide much richer and more detailed information to help the bots make sense of an objective. We use sub-objectives for that, to basically scale up and down much better. As a side effect, these sub-objectives also make it possible to create rich behavior even when you have just 1 or 2 bots. Having more sub-objectives allows the bots to interpret the objective more dynamically. For instance, if you have a location to defend, and there are multiple sub-locations to defend as sub-objectives, you can have the bots patrol between those sub-objectives.
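The patrol behavior this enables can be sketched in a few lines; this is a hypothetical helper, not the shipped code:

```python
import random

def next_patrol_point(sub_objectives, current):
    """Rotate a defending squad to a different sub-objective, giving the
    patrol-between-sub-locations behavior described above (a sketch)."""
    candidates = [s for s in sub_objectives if s != current]
    # With only one sub-objective there is nowhere else to go, so stay put.
    return random.choice(candidates) if candidates else current
```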
Another example would be attacking a location. This is done slightly differently, but it's the same concept: it can be handled with a specific way to attack, and an objective could take you down different roads to flank, and to approach that particular objective from a different direction.
Here's an example of "Search & Destroy." On the left are a couple of explosives, which must be set by the attacking team. This is a squad here that's defending: that's the label of the squad at the top, and that's the label of the bot, so you can tell there's only one bot assigned to that squad because there's only one label within the top squad. That particular squad is assigned to defend this area here. You can see this particular guy dropped a spawn grenade here, which means that the bots can spawn here. When defending, the bots can drop spawn grenades to essentially help whoever dies respawn quicker in that location and defend better.
Just to the left of this, there's another squad, and this one's got 3 bots, and you can see the label again with the bots within that particular squad. This squad is defending by placing a turret just here. In this particular case, it's defending a similar doorway entrance to prevent the attacker from placing explosives again; these, on the right, are the same explosives. On the left of that screenshot, just downstairs, is where the explosives were; that's another entry point, and so there's another squad of 3 bots, and again you can see the label and the 3 bots within that squad. That is a debug rendering of the defend objective: the defend objective essentially has arrows coming out of it, and the attack objective has arrows going into it. This squad here is pretty well defended, because they have a spawn grenade and a turret. That's the kind of defend behavior you get when you order the bots to defend.
If you have only one squad assigned to all this defending, that one squad will rotate between these 3 different points, and it'll try to do its best to cover all those 3 areas. That describes the objectives and the sub-objectives.
The process of assigning bots to fulfill those objectives is done with a small piece of code. Now, I wish this was a nice small nugget of an algorithm that you could take home and implement yourself, but it turned out to be much more trouble than I thought, and it took me a lot more time. There were many more special cases: the whole process of bots leaving and joining the team, being able to deal with squads left over from previous objectives, new objectives, and that basically means you have a very tricky situation here. So it's not as simple an algorithm as I expected starting out, but this is how it works.
Based on the weights of the objectives, you can calculate the number of bots per objective, and then each objective has a preferred squad size; based on that squad size and the number of bots, you can calculate the number of squads. That gives you the ideal case, step 1. Step 2 is to create missing squads, if there are some missing. Step 3 is to remove extra squads, if there are some left over from previous assignments. Step 4, I guess the most interesting one, is to pick an objective for each of the squads. If a squad already has an objective, then you reassign it to a different sub-objective on a regular basis, if there are sub-objectives. This gives you the patrol behavior of squads moving from one sub-objective to another. If the squad didn't have an objective, then we just assign it to the best possible objective; I'll talk about that in a bit. Step 5 is to unassign bots: if a squad was assigned to a different objective and it's a bit too big, we need to remove bots, and then reassign the bots. All these cases were necessary to deal with all the little subtle bugs that appeared when you start including the bots online. That was more work than I expected, but this covers pretty much everything you need.
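Step 1 of the pass above, computing the ideal bot and squad counts from the weights and preferred squad sizes, can be sketched roughly like this (the field layout and rounding choices are my assumptions; the shipped code handled many more special cases):

```python
import math

def plan_squads(num_bots, objectives):
    """Ideal-case planning sketch. `objectives` is a list of
    (name, weight, preferred_squad_size) tuples; returns, per objective,
    (bots_for_objective, number_of_squads)."""
    total_weight = sum(weight for _, weight, _ in objectives)
    plan = {}
    for name, weight, preferred_size in objectives:
        # Share the bots proportionally to the objective weights...
        bots_for_obj = round(num_bots * weight / total_weight)
        # ...then derive the squad count from the preferred squad size.
        squads_for_obj = math.ceil(bots_for_obj / preferred_size) if bots_for_obj else 0
        plan[name] = (bots_for_obj, squads_for_obj)
    return plan
```

For example, 12 bots split over an advance objective of weight 2.0 and a defend objective of weight 1.0 (both preferring squads of 4) would give 8 bots in 2 squads advancing and 4 bots in 1 squad defending; steps 2 to 5 then reconcile the existing squads and bots against that ideal.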
The previous slide shows what you need to do to get the system to work; that's the basic code that's needed. The key points within this algorithm are the places where you can make intelligent or stupid decisions, and that's where the heuristics come in. The heuristics decide how to assign, in this case, a bot to a squad. In practice, this is based on the distance from the current bot's position to the center of the squad, its center of mass. You calculate the average position of each player in the squad, and that gives you the center of the squad itself. You can use the distance to the objective that the squad is going to as an extra criterion in the distance calculation.
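A minimal sketch of that distance heuristic might look like this (2D positions for simplicity; the 0.5 objective bias is an illustrative assumption, not a shipped value):

```python
import math

def squad_center(positions):
    """Center of mass of a squad: the average of its members' positions."""
    n = len(positions)
    return (sum(p[0] for p in positions) / n, sum(p[1] for p in positions) / n)

def assignment_cost(bot_pos, squad_positions, objective_pos, objective_bias=0.5):
    """Cost of assigning a bot to a squad: distance to the squad's center of
    mass, plus a weighted distance to the squad's objective. Lower is better."""
    cx, cy = squad_center(squad_positions)
    d_squad = math.hypot(bot_pos[0] - cx, bot_pos[1] - cy)
    d_obj = math.hypot(bot_pos[0] - objective_pos[0], bot_pos[1] - objective_pos[1])
    return d_squad + objective_bias * d_obj
```

Each unassigned bot would then simply join the squad with the lowest cost.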
In the assignment process, the bots are actually assigned based on preferences for other bots, so a medic will team up with an assault guy, because they're a good combination together, and the tactician will pair up with an engineer. But in practice you don't really notice this; if I hadn't told you, you probably wouldn't have noticed. If there's anything to take away from this, it's that a simple heuristic is enough: this distance-based heuristic for assigning bots to squads is more than fine. Keep it simple; there's really no reason to make these things difficult.
As it turns out, the heuristic for assigning squads to objectives is really simple. It was, I guess, implemented as the simplest thing we could possibly do at first, and there was no particular reason for changing or improving it. It turned out to be first come, first served: whichever squad is assigned first, we keep. The last heuristic, for selecting badges, is done by design. The first 6 badges are fixed, and then the remaining badges are assigned semi-randomly, based on which are the best-performing bots. We only have one possible sniper and one possible saboteur in every game, and the other bots we just fill in when we need them, so this was another simple heuristic.
Next, a quick example of how "Capture & Hold" works. Tim implemented this as "all the bots defend the areas that they own," and so the objective is set here, and the strategy then decides if it wants to attack an area or not, and it'll pick those areas based on different criteria; I'll talk about that a bit later. These are the 2 main objectives you'll typically get in a "Capture & Hold" game. These objectives have roughly the same weight, which means that the squads and the bots will be distributed evenly between these 2 objectives. That's roughly how the assignment works. Then there's a special case: sometimes the strategy will assign single scouts to go and patrol the other "Capture & Hold" areas.
That covers part 1. In part 2 I'm going to talk about the strategic reasoning; this is where the intelligence comes in. The first part was more about the software architecture necessary to make it work, to make it not break, and to support all the features we needed, and this is the part that makes it really intelligent.
The strategy is a combination of automatic processing and manual level annotations. Any of you who've worked in the game industry know that you start off doing things manually, and then automate later if you need to. If it needs to scale up, you can come back to it. We started off with manual level annotations, and it turns out that in this part of the code, at least, we didn't need to come back and revisit it. The levels were relatively manageable and there weren't too many of them; even with downloadable content, it's a relatively low overhead to annotate these levels. I did a couple of them myself, it didn't take me long, and I'm not very good with Maya at all, so the overhead was low enough that we didn't need to automate this part.
Here's an example of the kind of annotations we add to each level. This is about 1/8 of one of the multiplayer levels. At the bottom here we have a regroup location, and there's another one that's not rendered up here, the same thing, a regroup location, and there are two more around this building. At each of the key junctions around this building there are regroup areas that are manually placed. These are key strategic locations that are useful to control: basically, if you dominate these kinds of areas, it makes it easier for you to play the game and for the bots to dominate that part of the map.
As well as those regroup locations, we added a bunch of sniping locations. These markers are placed very specifically on these buildings, but we are not micro-managing the snipers. A marker indicates a rough area in which the sniper should be; we're not telling him to stand exactly on that ledge here. The low-level AI kicks in and will find cover, use line of sight, and basically use all the richness of the low-level behavior that was also in the single-player game to find a good location to shoot from. We're just giving the sniper some rough strategic guidance with these markers.
There are a couple more mission-specific markers added to these levels as well. The big post in the middle here is one of those places where you have to place explosives, like I showed you earlier. To get there, there are only two entry points, so those are placed as markers. Again, we're not telling bots to stand specifically on that marker; we're telling the bot this is generally an area you need to control, and that covers the whole staircase here. This is a "Search & Retrieve" object: when you capture the propaganda speaker, you have to return it here, so that marker indicates a good place to defend for that. Very similarly, for "Assassination" we place markers that suggest where the bots should hide. If they're the assassination target, they all run to one of these markers inside the building. There are two entries to that building: one is this doorway from outside, and the other is a doorway on the inside that you can get to from the first level. That's the process of annotating one area of the map, and it really doesn't take much longer to do than it took me to describe it to you. Do that 8 times, and the level is annotated.
In combination with that, we used automatic level processing, and there's one main reason for this... In practice, that's what it looks like. This is the big picture of the strategic graph; it's a bit psychedelic. It actually was multicolored, so it looked like a shower curtain from the 70s, but I turned it grey because it looks more professional. *laughter*
The reason for the strategic graph is to support the strategic reasoning and any kind of algorithm we'd want to run within the level. We need a high-level graph because we can't work with the low-level waypoints; there's just too much information, and it's just not strategic. Having this strategic graph in place helps us make sense of these markers: each marker belongs to one area, and if we have good strategic areas, then the markers make more sense. This is the part that we combined with automatic processing to make more sense of the manual annotations.
The strategic graph is essentially a set of areas, and each area is a group of waypoints. These areas form a high-level graph: if there's a path in the low-level graph, then there's a path in the high-level graph too. It's nothing revolutionary; you all know about it. This is done using automatic area generation during export. The downside of this is that if your graph changes at runtime, it won't be supported. But that turns out to be fine, because the only obstacles that could've been dynamic, the turrets, were made traversable anyway. You can walk through those for gameplay reasons, because there were too many exploits, I believe.
This is the low-level waypoint network. This is what the bots' individual AI uses, and you can see how it's just way too detailed to use at the strategic level. What the export process does is calculate a bunch of areas based on these waypoints. This is roughly what they look like. You can see that the obstacles can help define the areas in a way, and the better the areas reflect the geometry, the better the quality of the high-level graph you get. This is what the strategic graph looks like; that's the shower curtain. The big circles in the graph here are not necessarily a good thing; they represent the number of connections between two areas. Having nice small connections between areas means that we've identified areas with good choke points, so having lots of small nodes around the level is a good thing.
That's the whole point of the area clustering. The algorithm would work relatively well with just random areas, but you'd end up with low-quality areas, and that means you'd get double the number of connections between the areas. The better the algorithm we can write for clustering the areas, the better the AI will perform, and the faster it will run as well. This is nothing revolutionary; in fact, there are at least 3 people I know who have re-implemented this before and since, and it's been used many times before. [See William van der Sterren's masterclass.] The idea is you start with one waypoint per area, and then you repeatedly merge two areas together until you reach a target number of areas. In this case there are, I think, 80 areas for each map, depending on its size.
There are some key heuristics to take into account; in fact, that's the secret sauce. The heuristic for deciding which two areas you should merge at every stage basically controls the entire behavior of the algorithm, and you have to spend a lot of time tweaking it. We didn't take into account information like line of sight, path finding within areas, and all that. It's really simple: the number of waypoints, and the actual surface of an area, which we take into account with a squared factor because we really want to punish big areas. We say: if you have a big area, then it's a bad merge. We also take into account the number of links between the waypoints.
Now this is the key ingredient: the first two are low-level factors, and this one's a high-level one. You look at the big picture when you're doing your merge: if merging two areas causes your high-level graph to have many connections, then that's a bad thing, because your path finding will take much more time when you have more connections. You have to be aware of both the high-level factors and the low-level factors here. I put them together on this slide, and taking those two things into account at the same time gives you good-quality strategic graphs.
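Putting the low-level and high-level factors together, one way to sketch the greedy clustering is below. The cost weights and data layout are assumptions for illustration; the real exporter worked on the actual waypoint graph:

```python
def merge_cost(a, b, new_external_links):
    """Merge heuristic sketch: punish large merged areas quadratically and
    penalize merges that leave the high-level graph with many connections."""
    waypoints = a["waypoints"] + b["waypoints"]
    surface = a["surface"] + b["surface"]
    return waypoints + (surface ** 2) * 0.001 + new_external_links * 10.0

def cluster(areas, adjacency, target):
    """Greedy clustering: start from one waypoint per area and repeatedly
    merge the cheapest adjacent pair until only `target` areas remain.
    `areas` maps id -> {"waypoints": int, "surface": float};
    `adjacency` maps id -> set of neighboring area ids (assumed connected)."""
    areas = {k: dict(v) for k, v in areas.items()}
    adjacency = {k: set(v) for k, v in adjacency.items()}
    while len(areas) > target:
        best = None
        for a in areas:
            for b in adjacency[a]:
                ext = len((adjacency[a] | adjacency[b]) - {a, b})
                cost = merge_cost(areas[a], areas[b], ext)
                if best is None or cost < best[0]:
                    best = (cost, a, b)
        _, a, b = best
        # Merge b into a: combine stats and rewire the adjacency.
        areas[a]["waypoints"] += areas[b]["waypoints"]
        areas[a]["surface"] += areas[b]["surface"]
        neighbors = (adjacency.pop(b) | adjacency[a]) - {a, b}
        adjacency[a] = neighbors
        for n in neighbors:
            adjacency[n].discard(b)
            adjacency[n].add(a)
        del areas[b]
    return areas
```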
On top of that, there's some runtime information. These graphs are useful on their own for defining areas, and that helps interpret the markers, but we use a form of influence mapping to make more sense of that information at runtime. A set of floating-point numbers is overlaid onto the graph, so for each area we can tell whether it's under enemy control or under friendly control. It's the standard thing they use in RTS games; this should be nothing new to you.
The key thing about the influence map is that it's always a compromise between making sure the information is high level and strategic, and keeping it up to date. We don't want it to oscillate and change every second someone moves in and out of an area, but at the same time we don't want it to have too much lag, so it's a tradeoff between those two factors.
The influence map is calculated based on all the bots, all the turrets, and deaths. If an enemy dies, then that adds positive influence for your side, and vice versa. These values are accumulated into the graph, smoothed, and then weighed in. We do this on a regular basis, depending on how many bots there are in the game: in small games it's done more often, in bigger games a bit less often, to spread out the calculations.
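A minimal sketch of that accumulate-and-smooth update, assuming one signed value per area on the strategic graph (the smoothing and decay constants are my assumptions, chosen only to illustrate the oscillation-versus-lag tradeoff mentioned above):

```python
def update_influence(influence, events, neighbors, smooth=0.25, decay=0.9):
    """One influence-map tick: accumulate signed events (positive for friendly
    presence or enemy deaths, negative the other way) into their areas, then
    blend each area toward the average of its neighbors and decay old values.
    `influence` maps area -> float; `neighbors` maps area -> list of areas."""
    for area, value in events:
        influence[area] = influence.get(area, 0.0) + value
    smoothed = {}
    for area, value in influence.items():
        ns = neighbors.get(area, [])
        avg = sum(influence.get(n, 0.0) for n in ns) / len(ns) if ns else 0.0
        smoothed[area] = decay * ((1.0 - smooth) * value + smooth * avg)
    return smoothed
```

Raising `smooth` and `decay` makes the map more stable but laggier; lowering them makes it twitchy, which is exactly the compromise described above.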
This is an example of the influence map in this big level. This is one of the home bases for the Helghast faction; the bright color is a strong influence for one faction. At the other side we have a strong influence here, probably because there's a bot or two here, or perhaps a turret, and the same here, there might be a couple of bots. That light color is a strong influence for the other faction.
The influence map is used for selecting things like regroup locations. When attacking a point, the squads will regroup at a temporary location before attacking, so you can use the influence map to decide if a place is good to regroup at or not. That's done really easily again. First, there's the idea that you use some kind of "tabu search," as the academics call it: you don't reuse a previous solution, and you avoid the same locations as the other squads. Then you use the influence map to filter, select, and rank the locations. Each squad picks a regroup location for attacking based on where it's going and on the influence map. That gives you a kind of flanking behavior: if the enemy's well fortified at the front, and there's a way to attack from the side, then the squad will go by the side.
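One way to sketch that selection, assuming candidate regroup markers, a tabu set of previously used solutions, and the locations other squads have already claimed (a hypothetical helper, not the shipped code):

```python
def pick_regroup(candidates, influence, tabu, reserved):
    """Filter out tabu and reserved locations, then rank the rest by
    friendliest influence (higher = safer for our side)."""
    usable = [c for c in candidates if c not in tabu and c not in reserved]
    if not usable:
        usable = list(candidates)  # fall back rather than pick nothing
    return max(usable, key=lambda c: influence.get(c, 0.0))
```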
Using the Influence Map
Here's an example for "Assassination": this is the assassination target, and this is me. This area's well defended, so it's got high influence for this particular faction. There are a couple of doors on the left here, and a couple of doors at the bottom, so that gives the attackers quite a few opportunities for getting into this amphitheater, which looks surprisingly similar to this one here. Watch out! Through the left door, this is the first wave; these guys are attacking. You can see they're dragging influence with them as they move along there, turning this area blue. They attack through here, but of course, as I was there, these guys didn't last very long. That whole area turns red when they die, and the next wave realizes, "Well, this area's kind of grayish, it's not red, so nobody died here, so it's a better approach," and because of the influence map they'll attack from this direction. As it turns out, it's not entirely done with the influence map on its own; there's some strategic path finding going on as well.
The reason for the path finding is to make sense of the influence map. When you're making decisions about where to move in space, you need to plan paths within the influence-map space, combining all this information and taking into account the influence over a certain distance. This is done with a "single-source path finder," as it's technically called: it calculates all the distances to a point in space. We can't afford to run all these calculations whenever we need to compare two points, so we cache these values, using one floating-point number and an index in each area for each of the path finders. This means we can look up any distance to the source point, so we pick a good source for the path finder based on where the squad is. Then we can look up how long it would take, and how much risk it would involve, for the squad to reach any particular location in the map.
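A single-source path finder over the area graph is essentially Dijkstra's algorithm run once from the source; the sketch below caches a cost per area so that later comparisons are plain lookups. (Plain one-shot Dijkstra here; the talk's version is incremental, as described later, and its edge costs would come from the influence map.)

```python
import heapq

def single_source_distances(graph, source):
    """Compute the cost from `source` to every reachable area once.
    `graph` maps each area to a list of (neighbor, edge_cost) pairs.
    Afterwards, comparing candidate locations is a dictionary lookup."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, area = heapq.heappop(heap)
        if d > dist.get(area, float("inf")):
            continue  # stale heap entry
        for neighbor, cost in graph.get(area, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist
```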
This is done by giving each squad its own strategic path finder. In the same way that each individual has the ability to plan paths, the squad uses this both for reasoning and for helping guide the bots at the low level. The high-level path finder figures out the paths in the strategic graph and then hands the areas over to the bots, and the bots will just be contained within these areas. What the high level is doing is encouraging the bots to go to locations which make sense strategically.
This is the influence map again, just to give you an idea of how the costs are set within the strategic path finder. If I'm in the blue faction, then all this area here is going to be super cheap, less than 1, so it's almost free to cross these areas. For enemy areas, the cost increases with the influence as a squared function; it's like climbing a really steep mountain. We discourage that! In fact, we had to tweak the weights quite a lot to stop certain bots from preferring to run through the enemy base with all its turrets, because some bots originally were very much afraid of death from other bots. They would run to their death and get mowed down by the turrets instead. Tweaking the weights is very important to get the path finding to go via sensible locations.
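The cost shaping described here might look something like this; the exact constants are assumptions (the talk stresses these weights took a lot of tweaking), but the shape matches what's described: friendly areas cost less than 1, and enemy cost grows quadratically with influence:

```python
def area_cost(influence):
    """Traversal cost for one area given its signed influence
    (positive = friendly to the traveling squad, negative = enemy)."""
    if influence >= 0.0:
        # Friendly areas are nearly free, but bounded away from zero so
        # the path finder still prefers shorter routes among safe ones.
        return max(0.1, 1.0 - influence)
    # Enemy areas: cost climbs with the square of the influence,
    # like a steep mountain the path finder wants to avoid.
    return 1.0 + (abs(influence) * 4.0) ** 2
```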
This is a quick example of the strategic path finder; we're going to zoom in on this in a bit. The key idea here is that the squad is currently targeting this position here, so that is the center of this particular path finder. What this means is that we can drop a bot anywhere in the level and look up exactly how to get to this point here. If we zoom in on this area, you'll see all the arrows pointing in this direction; it will send me over here, for instance. All of these arrows can be followed to go wherever you're supposed to go in a strategic way. This can be done with a lookup, so it doesn't take any time beyond calculating the values in the first place, and that's not done again.
The reason for using this kind of path finder is that, for instance, if the squad is regrouping here, we can put the path finder there, and for all the paths that need to go from one point to another, we can just ask: "Hey, how do I get to the regroup point?" "How do I get back out again?" So instead of having to solve this problem for all possible pairs of points, we're picking a few key locations and focusing all the searches through those locations. That means we can plan paths for all the bots within one squad much more effectively, and take into account all the changing weights, and that is cached for us.
Incremental Single-Source Algorithm
In practice, this is done with an incremental algorithm. The problem is, when you have the low-level bots doing HTN planning 500 times/sec, there aren't many resources left for high-level reasoning. So this is done in an incremental fashion: the idea is that you take the current distance estimate and update it on a regular basis, trying to improve the estimate. With an incremental algorithm you can spend more or less time calculating or updating these distance values. That means it can scale up and down; it doesn't scale down to zero, but it can scale quite far and still work. It's a trade-off, of course, and I'll show you a screenshot at the end. The algorithm itself is a combination of a couple of things. In fact, I'd worked on this before, and that's why I did it this way; I might not have done it otherwise. I did my master's thesis around this idea, so it seemed the perfect fit. Where before I was applying this kind of thing to individuals, which didn't make sense, for a cached high-level graph that's searched at runtime with constantly updating values, it was the perfect fit. It uses a bunch of tricks to deal with dynamic changes in the environment; that's in my master's thesis.
Here's a quick example of how it works for the squad corridors. Each squad is processed in order, so we know which squads have gone first and what information the first squad has used. If we have the center of gravity of one of the squads here, and it's going here, we know what its main corridor is by doing a quick lookup of the path to the center regroup location. Based on that, for the next squads, we set a higher cost around that corridor; not as high as an enemy-influenced cost, just not quite as cheap as being in another area. The idea is that we're encouraging the squads to use different areas of the map, to encourage diversity.
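That cost-inflation trick can be sketched in a few lines (illustrative Python; the penalty value and data layout are assumptions, not the shipped numbers):

```python
# A mild multiplier: dearer than open ground, but far cheaper than the
# penalty an enemy-held area would carry (value is illustrative).
CORRIDOR_PENALTY = 2.0

def reserve_corridor(area_costs, corridor_areas, penalty=CORRIDOR_PENALTY):
    """Return a new per-area cost table in which areas belonging to an
    earlier squad's corridor are more expensive, nudging later squads
    to route through different parts of the map."""
    costs = dict(area_costs)  # leave the original table untouched
    for area in corridor_areas:
        costs[area] *= penalty
    return costs
```

Each squad plans against the cost table left by the squads processed before it, which is why the processing order matters.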
Here's a path finder. This particular squad is regrouping here. Its goal is to defend this particular "Search & Destroy" objective. All the path finding of this particular squad goes via this point here, but at the same time it's also reserving its corridor. As that squad moves out of the base (the top right is the base), it's going to use this set of areas to get to this particular location to defend that point here. This is the first squad processed, so it reserves that as a corridor, and the other squads are going to avoid it. This next squad has decided to regroup here, and it's got the same goal — it's got to defend this area here — and it's coming out of the base up here, but it takes a different path through the base — it takes this road here — and it will move into this position.
That squad took a different path, and the final squad also takes a different path, but it's also going to a different location, so that creates even more variety. This is a sniper squad, and the sniper squad has a different set of markers, so the sniper will pick a particular location on top of a building to defend that area, because it makes more sense for the sniper to use that location. The whole process is what I described earlier of picking the best places to regroup. That's the path that the sniper uses, and that's another different path.
What you can see here is that there are certain black areas where the path finder hasn't calculated values yet. That's the downside of using an incremental algorithm: those values are just not available, and you can't have them available without putting in the computation. There's always the individual AI to fall back on, and this could be solved by dedicating more resources to the problem; it's always a trade-off. But in this case, it worked out well.
The take away for my part is that I hope you didn't learn anything new today. *laughter* All these concepts have been around in the game industry for decades or maybe even longer, and there isn't anything particularly revolutionary about any of the components. As a matter of fact, I don't think any of these components are particularly difficult. There's a couple of things I haven't talked about, like how you pick a place to spawn, and you can see how that would work based on the influence map or the pathfinder. But what you get out of it as a strategy is much more than what was put into it. So all these different components are coming together for the first time in a first person shooter, well, at least that's the impression that we have of it, bridging the gap between real time strategy games and FPS.
That's my part of the talk, and Remco is going to take over with the final words for this particular discussion.
Remco Thank you. After we finished this project, of course we still had a lot of open issues: things we wanted to try half way through, or ideas we got at the end. I just want to highlight some of the ones that we ourselves think are most interesting, mainly for our own nerd score, and not necessarily for commercial projects.
The first one: Alex mentioned that we annotate the various levels by hand at the moment to find, for instance, the sniper positions. Now that the project is finished and the game has been out for a couple of months, we have about 1.2 GB of round data available. We can see who shot who, from where, in what level, in what game mode, for all the games that have been played online so far. One of the things we've been thinking about is using that data to do some data mining to find sniper positions. This is of course very interesting, though commercially maybe not the smartest thing to do. But it does show, especially for multiplayer games, that there is good potential to do something beyond marketing statistics with the data you gather. You can use the data you get once the game has been released, but you could also do it with data gathered during public betas or in-house play testing. That's one idea.
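As a rough illustration of what such mining could look like, here's a hypothetical sketch that bins logged shooter positions into a coarse grid and keeps cells with many long-range kills (every threshold and the data layout are invented for the example):

```python
from collections import Counter

def candidate_sniper_spots(kill_events, cell_size=8.0,
                           min_range=50.0, min_kills=25):
    """kill_events: [(shooter_xy, victim_xy), ...] from round logs.
    Returns grid cells where many long-range kills originated."""
    bins = Counter()
    for (sx, sy), (vx, vy) in kill_events:
        if (sx - vx) ** 2 + (sy - vy) ** 2 < min_range ** 2:
            continue  # ignore close-range fights
        bins[(int(sx // cell_size), int(sy // cell_size))] += 1
    # Most popular shooting cells first, filtered by a minimum count.
    return [cell for cell, n in bins.most_common() if n >= min_kills]
```

Cells that keep producing long-range kills are candidates for the kind of sniper markers the designers currently place by hand.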
AI & Automated Testing
The other thing we sort of discovered by accident is that it's actually good practice, when you make a mock-up version of your level, to also prepare it for bots right away. Let the level designer annotate it and put waypoints in there, because a lot of times it's easier to run a couple of bot games and observe if there are strange things going on. You could, for instance, find out that the level is completely lopsided and one faction will always win a specific game mode, because the distances between the two home bases are just not correct. We discovered a couple of those cases while we were debugging the bots, and now for every multiplayer level we do, we're planning to include the bots a bit earlier during the gameplay testing phase, so we can discover these types of errors.
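A balance smoke test along those lines could be as simple as this sketch (`run_bot_game` is a hypothetical hook that plays one bots-only match and returns the winning faction name):

```python
def flag_lopsided(level, run_bot_game, games=100, threshold=0.65):
    """Play `games` bot-vs-bot matches on `level` and flag it when one
    faction's win rate exceeds `threshold` (numbers are illustrative)."""
    wins = {}
    for _ in range(games):
        winner = run_bot_game(level)
        wins[winner] = wins.get(winner, 0) + 1
    worst = max(wins.values()) / games
    return worst >= threshold, wins
```

Run overnight on mock-up levels, a check like this can surface problems such as mismatched base distances long before human play tests.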
Bots for Teaching
The other thing we thought was kind of interesting, because of the whole architecture, is that we could now expand on the teaching role of the bots. If you remember, all the way at the beginning we said that one of the main reasons we have bots in the game is to help people ease into the game, with all the different game modes, all the abilities you have, etc. This architecture gives us nice handles to actually help players actively; right now we do it passively, by just playing nicely and having players observe what the bots are doing. What you could also do is place the player in a bot controlled squad and have the squad leader hand out commands, not in the sense of orders, because the player doesn't take orders, but in the sense of chatter, something that sounds like microphone chatter: "Oh, we're going to defend this area, now we are here, you are an engineer, place down a turret." If you see how the whole thing was set up to control the bots, it would be very trivial to play samples instead of handing out orders to AI controlled players. It might be a good way to train people how to play the game, in the actual game. Instead of "Oh, there's this level, now press jump to jump over the first thing you see, etc.", it'll all be in context. People can ignore it, but if they follow it, it will make them better players. That's another thing that we thought of and that we might actually try to do.
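Reusing the order channel for chatter might look something like this (the order names, badge names and sample files are all made up for illustration):

```python
# Hypothetical table mapping (order, player badge) to a radio-chatter
# sample; None acts as a wildcard badge.
CHATTER_SAMPLES = {
    ("Defend", "Engineer"): "chatter_defend_engineer_turret.wav",
    ("Defend", None): "chatter_defend_generic.wav",
    ("Advance", None): "chatter_advance_generic.wav",
}

def chatter_for(order, badge):
    """Pick the sample a human squad member would hear in place of the
    order a bot would receive; falls back to the generic line."""
    sample = CHATTER_SAMPLES.get((order, badge))
    return sample or CHATTER_SAMPLES.get((order, None))
```

The squad leader's logic stays untouched; only the delivery changes, which is why the talk calls this "very trivial" given the existing architecture.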
Taking into Account Players
The other thing that's missing from the whole AI at the moment is that, even though we respond to the things enemies are doing, we're completely ignoring what friendly human players are doing. So 4 bots on one side will play exactly the same whether or not there are also 4 human players on the same side. We don't take into account, for instance, in "Capture & Hold", whether friendly players are advancing on an enemy area; we do nothing to help them or to join their attack on that area. Depending on how often that is actually perceived as a problem, we may want to do something there as well. Those are the main things we're hoping to spend some time on for the next iteration.
The Importance of Squads
Question Why is the squad level so important for you? You could have some kind of pool of objectives, where people are assigned to objectives, and some objectives require more than one person. But why is the squad level so important?
Remco The objectives: it's a matter of decoupling, once again. The strategy hands out a number of objectives, and there could even be more of them than you could make squads for. The squads actually implement the behavior needed to achieve those objectives. It's mainly a matter of splitting those two things: you make a strategy that just spits out objectives, and on the other end you engineer the squads and the squad behaviors to achieve them. Another, historic reason is that we already had the squad concept for single player, where the scripters can order the squads around from a Lua script, and there, of course, there is no high level concept of objectives. So if you consider it like that, it's just the way we glued these two concepts together.
Alex There's one good reason that's less about software architecture and more about gameplay: the snipers have their own squads. The sniper is a special case squad; they achieve the objectives in a different way, and, as Remco said, the squad gives you that decoupling when you're doing things slightly differently. The way it works in this case is that each sniper has its own squad. Each sniper does strategic path finding, because the sniper's weakness is often taking the wrong tactical path, and he doesn't have a way to deal with enemies [nearby]. So the sniper is a special case that has a slightly different piece of logic running. In the future, at least for me, I would move even more code into the squads, make them even bigger, and put even less responsibility in the strategy — so moving stuff that was C++ into the [planner] domains, and making the squads generally more important. And I guess that ties into another piece of future work: the idea of feeding all this strategic information into the squads as well. We've got all these influence maps and all this information that's currently only used a couple of times by the squads; rebuilding the squads to take all that into account means you can make the strategy layer even thinner, which is probably a good thing.
Question I'd just like to know how many people and how much time did it take to manage to obtain this result. How much time did it take to do all the work on the bots?
Remco What actually happened is that, at the beginning of the project, there was a humongous list of features that needed to be implemented, and one of them was the bots, and like I said last time, the AI programmers just decided to implement the bots themselves in the last two weeks, because they liked that feature. The feature was not very high on the list, and I was actually given no resources for it, so I thought, "Well, let's come up with a plan and get an intern from university", which was Tim. He then built part of the whole approach for "Capture & Hold", and we showed it to the game designers and they really liked it, so then we got some resources, and I hired Alex to finish it. I couldn't convince them at first, but like a lot of times, I just made sure there was something they could actually see, and see how incredibly cool it was, and then we got it past management. It was difficult.
Alex I was just going to say, I think it was a perfect environment for that kind of prototyping: having it first come in as a student project which you take to the next level, then bringing in someone that's not necessarily as closely attached to the whole team to do a few features, saying "Hey, this sounds like a good idea, let's just try it." Having that progress at every stage makes it possible. All of this stuff took time, but no particular step was revolutionary; it didn't take us years to build, there was always visual progress, and that was the most important thing. Because that worked well, I just did more and more until it was time to stop.
Debugging Multi-Layered AI
Question I very much like the idea of multilevel AI; however, in a multilevel organization it's usually very difficult to assign blame when the global behavior is not satisfactory. Did you experience this problem, that something was not right and you didn't know if it was because of the tactical or the strategic AI?
Alex I think it's easy to find problems when you have debug views. As I mentioned this morning, we have lots of debug renderings, so these kinds of problems are findable; I mean, it takes time to find them, but assigning blame for bugs is just part of the debugging process, part of the job I guess: saying which component caused a particular bug or made the system fail.
Remco What you mean is assigning blame during debugging; that is always a problem when there are a lot of different systems all working together to produce a specific behavior. In this case the good thing is that there are very clearly defined interfaces between the different parts. In order to understand what an individual guy should be doing, you just need to look at his commands and his own world state database, and whatever type of thing he is, to tell whether that part is working correctly or not. If you want to check the level above, you can move up to the squad and see if it's doing what it should, given that it has orders from the higher level and its own behavior. So the way the parts are split up, with quite well defined interfaces, makes that simpler. Even though in theory this could have been a problem, in practice it was never more of a problem than fixing any of the normal AI bugs you encounter.
Question So it's because you had a very precise specification of the interfaces between the different layers...
Remco I wouldn't go as far as to say that we have a very precise specification of the whole system... *laughter* Because this is being recorded. But it is the case that we know that if you look at the orders plus the planner, you can predict what the individual behavior should be; if that goes wrong, the error is in the individual, otherwise it's probably somewhere higher up in the tree. So that made it quite easy to find out where things were going wrong. Does that answer your question?
Attendee Very well, thank you!
Question You mentioned the 500 plans/sec, and I just wondered if you could give some rough performance numbers. And for that matter, how long did the preprocessing take, coming up with the areas and so on?
Remco Do you mean performance statistics for the runtime part or for the preprocessing part?
Attendee Preferably both but especially the run time.
Remco For the runtime: the game runs at 30 frames per second. The game logic update is done on one of each pair of frames, and the AI has part of the other, alternating frame. So we have about a quarter of the available PPU budget. Within that budget we managed to do 15 bots, 10 turrets and 6 sentry zones, all of which are planning, plus, how many squads would there typically be? 6 to 8 squads, and the strategy update for both sides. That is the runtime part; for the offline part, Alex knows best.
Attendee Sorry, are you saying 25% of the CPU for the frames when the AI was running?
Remco No, 50% of the PPU; on the PS3 the PPU is just one of the processors. We use 25% averaged over the two frames, so 50% of the frame where the AI runs. But that also includes not just the planning; it includes things like perception, and the actual logic to perform specific tasks, so those are also in that budget.
Alex The offline part is about 10-20 seconds. But that's not optimized; I know there are a couple of loops in there that are squared [quadratic] or even cubed [cubic]. It wasn't a bottleneck, and there were other calculations done at export time. There's room for improvement there; it could be dropped down to a couple of seconds.
Alex Maybe one last question, because we have to wrap up... Make it a good one! *laughter*
Single Player Differences
Question So how different were the multiplayer bots from the single player AI?
Remco On a general level, as I just described, the overall architecture for single player is more or less the same; only the top part, the strategy, which in multiplayer is its own AI module, becomes a Lua script in single player. The script then hands out orders directly to the various squads or individuals, not objectives. That part is different. The squad AI is, I think, almost entirely the same between multiplayer and single player; yes, there are some little differences, but it's mainly the same domain. And the bot domain, apart from the badges, which only exist in multiplayer, is at its core mainly the same. There are some specific behaviors that only exist in single player, but the way it's decomposed at the higher level is the same. Also a lot of the separate behaviors, like "throw grenades", "fire this, this and this", "use machine guns under these conditions", are I think almost completely the same. Tim knows a bit better, but he's nodding, so I guess yes. That part is very much shared.
Alex Well, that's it I'm afraid. If you have any more questions, I'm sure Remco and Tim will be more than happy to answer your questions, but thanks a lot for your time!
Thanks to improvements in PC & console hardware, there's now an opportunity for game developers to leverage AI technology from different genres. In studios that reuse their AI codebase from one generation to another, or those with large enough AI teams, it's now possible for a single title to bridge the gap between first-person shooters (FPS) and real-time strategy (RTS) technology — and Killzone 2 is a perfect example.
This is part 2 of the presentation about KILLZONE 2's multiplayer bots from the Paris Game AI Conference 2009. You'll see how the high-level strategy was implemented for multiple game modes including Search & Retrieve, Assassination, Capture & Hold, and Search & Destroy. You'll also learn how terrain areas can be created automatically using an area clustering algorithm, and form the basis of the tactical pathfinding used by squads and the high-level strategic reasoning.
About the Team
Alex J. Champandard (main presenter) was contracted during the production phase to build the KILLZONE 2 multiplayer strategy, which he worked on for the best part of 2008. Alex also implemented the badge abilities and multiplayer-specific behaviors.
Remco Straatman (conclusion) is the Lead AI Programmer at Guerrilla Games, and was in charge of both the single player and multiplayer AI on KILLZONE 2. Remco's also responsible for the development of the AI technology over the years. He presents the last few slides in the recording below.
Tim Verweij (off screen) established the overall architecture of the multiplayer bots as part of his Master's thesis. Tim also implemented the Capture & Hold strategy, and worked on the HTN domain for the bots' behavior.
Here's the video in a downloadable format you can save and play offline:
From Squad Tactics to Real-time Strategy: High-Level Multiplayer Bot AI in KILLZONE — Video Alex J. Champandard, Remco Straatman Download MOV (QuickTime)
The MP3 file below is better quality than the streaming video above, and is a perfect candidate for listening to on a portable player (96 Kbps). The OGG file is the highest quality of all (128 Kbps). You can download them both here:
The slides used during the presentation are available here:
KILLZONE 2's Multiplayer Bots — Slides Alex J. Champandard, Remco Straatman, Tim Verweij Download PPT or PDF
If you have any questions feel free to post them below, or in the forum thread associated with this post!