Over the past month, I’ve received quite a few questions about the stuff I’m working on — and some of you even seem genuinely interested! So I figured I’d take the opportunity to write a post about my master plan for 2008. It’s taken me until now to write this since there was a big unknown in the equation, which has only just been resolved…
Starting Monday, I’ll be contracting on Killzone 2 at Guerrilla for a short while. I’m sure you understand I can’t go into any details about that — but you’ll notice a few changes around here. Read on to find out how this will affect the blog and the site, plus what’s in store for the rest of the year.
Killzone 2’s AI
If you came here looking for gossip or juicy insider stories, I’m afraid you’re not going to find much! I can’t tell you anything that hasn’t already been made public, nor can I say what I’ll be working on or how long it’s going to last. However, I can use this as an argument to convince you that I don’t pull articles out of my posterior, but that I’m genuinely passionate about this topic… (Either way, subscribe and find out!)
Anyway, on some social sites, my profile says:
“I’m primarily the editor at AiGameDev.com, but I occasionally do some contracting when a cool AI game development project comes up!”
Well, this is one of those cool projects: the kind that most AI enthusiasts (myself included) daydream about. It’s definitely going to be both fun and challenging at the same time. In an ideal world, I’d probably even do it for free, but of course I didn’t tell Guerrilla that…
The AI team here in Amsterdam is pretty amazing, and I’m particularly looking forward to working with them. Some of you may know Remco Straatman or Arjen Beij from GDC or even the AI Wisdom books.
Screenshot 1: Official screenshot of Killzone 2.
What’s Going to Happen at AiGameDev.com?
The site has been a primary focus for me over the last 6 months, and it will remain so for the foreseeable future. More importantly, the site has become an integral part of the game AI community as a whole, reaching over 1,500 subscribers and about as many daily visitors. It’s not going anywhere!
Basically, the posting frequency will change a little in the short term, but the quality of the articles should improve if anything! The combined experience and IQ of the masterminds at Guerrilla should trickle through from discussions over lunch. Expect more insights into life in the industry generally, as well as thoughts about other games.
I’ll keep updating the site multiple times a week, but I’ll no longer be able to commit to a post a day. In particular:
Tuesday’s discussion and Saturday’s links will keep going on a weekly basis. I’m now hiring someone to help with the writing on those days since they don’t require as much technical knowledge. (Email me for more details.)
Sunday’s post will disappear. Essays and editorials will move to Fridays along with reader questions, which should remain a weekly feature. I’ll use the weekend to prepare articles in advance…
Monday’s game reviews, the Wednesday tutorials, and Thursday theory will probably run every two weeks rather than weekly — depending on my workload.
Also, to help diversify the opinions on this blog, if you have a little writing experience then feel free to contact me about guest articles. The ones that have run so far, Justin’s and Gabriel’s, have been very well received! I might revisit and improve early articles on the blog too, in the hope of bringing the new readers among you up to speed.
Screenshot 2: In-game rendering of Guerrilla’s upcoming PS3 shooter.
Game::AI++

Over the past year, I’ve been experimenting with various prototypes for an extensible decision making and control system. The C++ implementation is called Game::AI++. Right now, it’s still more of a research project than anything else, aiming in particular to:
Unify planning and execution into one system,
Bridge the gap between designer control and autonomous AI,
Support concurrency ubiquitously in the behaviors.
You can read more about the technical design as well as the motivation behind the project. I’ve been releasing versions incrementally over the past few months to people who are interested. The latest version is in the forums; just sign up and introduce yourself to get access to the code.
The design is based on things I wished I had at Rockstar, but having the dog simulation tutorial as a grounding point for all these ideas is very helpful. Technically, the code has matured a fair bit over the past few months, although it’s still not very easy to use (to put it mildly).
It already supports most of the features discussed in my lecture about behavior trees (part 1, part 2, part 3), including sequences and backtracking search, parallel execution, filter and decorator behaviors, and actions and conditions. The execution of the behavior tree is managed centrally, and so is the memory, via a dynamic blackboard.
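To make the centrally managed, status-driven execution concrete, here’s a minimal sketch of a behavior tree with leaf tasks and a sequence node. The `Status` enum and node names are illustrative assumptions of mine, not the actual Game::AI++ API:

```cpp
#include <cstddef>
#include <functional>
#include <memory>
#include <vector>

// Result of ticking a behavior once.
enum class Status { Running, Success, Failure };

// Base interface for every node in the tree.
struct Behavior {
    virtual ~Behavior() = default;
    virtual Status tick() = 0;
};

// Leaf node wrapping an action or a condition as a callable.
struct Task : Behavior {
    std::function<Status()> fn;
    explicit Task(std::function<Status()> f) : fn(std::move(f)) {}
    Status tick() override { return fn(); }
};

// Sequence: ticks children in order, failing as soon as one fails,
// and remembering where it was across ticks while a child is Running.
struct Sequence : Behavior {
    std::vector<std::unique_ptr<Behavior>> children;
    std::size_t current = 0;
    Status tick() override {
        while (current < children.size()) {
            Status s = children[current]->tick();
            if (s == Status::Running) return Status::Running;
            if (s == Status::Failure) { current = 0; return Status::Failure; }
            ++current;  // child succeeded; move to the next one
        }
        current = 0;
        return Status::Success;
    }
};
```

A real system would layer selectors, parallels, filters and decorators on top of the same `Status` interface, with the blackboard supplying the data each task reads and writes.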
The major features left to implement, in my opinion, are the following:
Stack Mechanism — Behavior trees are very closely related structurally to stack languages, so adding a stack is the easiest way to allow different behaviors to communicate. It should also make it possible to use the behavior trees for lower-level calculations too (e.g. target selection, cover evaluation).
Goal Architecture — This starts with a simple lookup table for each character to separate goals from the behaviors that achieve them. On top of that, it’ll be easy to implement a dynamic queue that can take goals as orders and looks them up in the table at runtime.
Advanced Concurrency — There’s already support for synchronous concurrency using parallel nodes in the behavior tree, which handles forks and joins in a very structured way. What’s missing is support for free-form concurrency, where arbitrary goals, reactions and behaviors can run in parallel and organize themselves. This will involve resource allocators as decorators in the behavior tree, and possibly a prioritizing mechanism.
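The goal architecture described above, a lookup table per character plus a dynamic queue of orders, might look something like this minimal sketch (all the names here are hypothetical, and a behavior is reduced to a plain callable):

```cpp
#include <deque>
#include <functional>
#include <map>
#include <string>

// A behavior is reduced here to a callable returning true on success.
using BehaviorFn = std::function<bool()>;

// Per-character lookup table separating goals from the behaviors
// that achieve them.
struct GoalTable {
    std::map<std::string, BehaviorFn> entries;
    void define(const std::string& goal, BehaviorFn behavior) {
        entries[goal] = std::move(behavior);
    }
};

// Dynamic queue that accepts goals as orders and resolves them
// against the table at runtime.
struct GoalQueue {
    std::deque<std::string> orders;
    void push(const std::string& goal) { orders.push_back(goal); }

    // Runs the next order; fails on unknown goals or failed behaviors.
    bool runNext(const GoalTable& table) {
        if (orders.empty()) return true;  // nothing to do is fine
        const std::string goal = orders.front();
        orders.pop_front();
        auto it = table.entries.find(goal);
        return it != table.entries.end() && it->second();
    }
};
```

The nice property of the indirection is that designers can issue orders in terms of goals while the table decides, per character, which behavior actually runs.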
Tree transformations are a powerful concept, though it’s not always obvious what the best patterns are for getting the most out of them. That said, I’ve become more comfortable with the idea and there are many applications…
Photo 3: A flower shop near Guerrilla’s offices.
Is There a Holy Grail?
Probably not! But what I want to research over the next year is the following premise of mine:
“If you have an efficient decision making and control system that’s capable of incremental planning and opportunistic replanning, and that’s easily edited by designers, then why not apply it to other aspects of a game than just high-level AI?”
I’m assuming that this approach should benefit game developers in many ways since it:
Requires less code to implement many different subsystems,
Improves the quality of the decision making by widening its scope,
Helps designers in other parts of the game by giving them better tools.
In particular, there are three areas relating to game AI that I would like to apply this technology to.
Hierarchical Planning for Paths
With the huge size of the worlds in games these days, hierarchical pathfinding has become the de facto standard in industry. There are even middleware implementations that don’t rely on A* or any other heuristic search anymore; given a good level representation, all you need is a hierarchy of compact lookup tables and some reactive locomotion.
Integrating the pathfinding with the main AI reasoning should, in theory, help:
Reduce path calculations by using local decisions to decide what kind of movement is necessary.
Make the most of incremental planning to reduce the need for full path computations.
Integrate the behaviors better while waiting for the results of the pathfinder, getting the characters moving much quicker.
Also, I’m curious to see how far one can get without using any kind of heuristic planning like A* — just the hierarchy and a best-first search. It will require good domain knowledge, but that can probably be generated automatically. This topic was also brought up in this forum thread by Phil (registration and introduction required).
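To illustrate the “compact lookup tables” idea, here’s a toy single-level next-hop table that recovers a route with no heuristic search at all; a real middleware solution would stack one such table per hierarchy level. The `NextHop` type and `followRoute` function are my own illustrative names:

```cpp
#include <map>
#include <utility>
#include <vector>

// One level of precomputed routing: table[{from, goal}] gives the next
// node to step to. A full hierarchy would hold one table per level.
using NextHop = std::map<std::pair<int, int>, int>;

// Follows the table from start to goal without any heuristic search,
// returning the route, or an empty path if the goal is unreachable.
std::vector<int> followRoute(const NextHop& table, int start, int goal) {
    std::vector<int> path{start};
    int node = start;
    while (node != goal) {
        auto it = table.find({node, goal});
        if (it == table.end()) return {};  // no entry: unreachable
        node = it->second;
        path.push_back(node);
    }
    return path;
}
```

The trade-off is memory for precomputed entries versus per-query search time, which is exactly why the tables need to stay compact and hierarchical.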
Photo 4: Ghost tram in Amsterdam
Improving the Sensory System
The first prototype of an AI typically includes all the sensory code directly within the decision logic. It’s very explicit: you only request what you need, and you’re sure to get what you want. Then, as you add more characters, you realize that the AI is taking up over half of the computation budget, so you have to make changes.
That’s where a separate sensory system comes in! It calculates all the data you need in batches, caches it, and dispatches information to the different characters. However, that introduces new problems into your AI. Often, it’s:
Clunky — If the decision making system needs more data, and you want it to be efficient, you need to support it in the sensory system.
Wasteful — Conversely, if some of your actors don’t need certain information from the game, then the sensory system has calculated data for nothing.
I’d like to be able to build the decision logic directly assuming I can get all the data I want efficiently, but dealing with potential failures in each case. Then, let an automatic sensory system work out which requests to batch up, how to prioritize the requests, etc.
“A system capable of intelligently managing computation is an important part of next-gen AI.”
Essentially, while planning, the AI would scan different possible paths through the behavior tree. Each time the planner hits a sensory condition or query, it would pause that coroutine until the data is available. The underlying interpreter could then batch up each of these queries by running them together based on what data they need by default.
This idea can be similarly applied to any kind of computation, including path- or cover-finding for that matter. Having a system that’s capable of intelligently managing computation (and not only behaviors) is an increasingly important part of the puzzle with next-gen systems.
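A minimal sketch of that batching idea: behaviors issue queries and pause, and a central dispatcher runs each expensive computation once per kind of data and wakes everyone up with the result. The `SensoryBatcher` name and callback shape are assumptions of mine, not an existing API:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// A pending sensory request: which data is needed, plus a callback to
// resume the paused behavior once the result is ready.
struct Request {
    std::string key;  // e.g. "visibility", "cover"
    std::function<void(float)> resume;
};

// Central dispatcher that batches requests by the data they need, so each
// kind of computation runs once per frame instead of once per character.
struct SensoryBatcher {
    std::vector<Request> pending;

    void query(const std::string& key, std::function<void(float)> resume) {
        pending.push_back({key, std::move(resume)});
    }

    // compute(key) performs the expensive shared calculation once per key;
    // results are cached and dispatched to every waiting behavior.
    void flush(const std::function<float(const std::string&)>& compute) {
        std::map<std::string, float> cache;
        for (auto& r : pending) {
            auto it = cache.find(r.key);
            if (it == cache.end())
                it = cache.emplace(r.key, compute(r.key)).first;
            r.resume(it->second);  // wake up the waiting behavior
        }
        pending.clear();
    }
};
```

In the coroutine-based design described above, `resume` would be the paused path through the behavior tree rather than a raw callback, but the batching logic is the same.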
Balancing Animation and Responsiveness

A final topic I’d like to research is how to strike a balance between realistic animation and ultra-responsive behaviors. Too often, one is sacrificed for the other because there’s no way to reach a compromise without a complete rewrite of the system. I’d like to be able to tune a slider that specifies how much error can be introduced into the motion capture data.
To do this, a goal-directed AI really helps. But it’s important to:
Let the animation know about the value of higher-level AI goals so it can generate suitable motions.
Inform the AI about the different possible animation options (e.g. their cost) so it can filter out what doesn’t work.
It’s two sides of the same coin really, but it shows how integrating the two systems could really help again. This is a subject I discussed in more depth in my analysis of the animation technology in Crysis.
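As a toy illustration of the second point, the AI filtering out animation options by cost, assume each option exposes the motion error it introduces and how quickly it responds (these structures are hypothetical, purely to show the trade-off):

```cpp
#include <string>
#include <vector>

// One candidate animation for achieving a movement goal, with the motion
// error it would introduce and the time it takes to respond.
struct AnimOption {
    std::string name;
    float error;         // deviation from the motion capture data
    float responseTime;  // seconds until the character starts moving
};

// Pick the most realistic option (lowest error) among those that respond
// within the budget the slider allows; the AI filters out what doesn't fit.
const AnimOption* selectAnimation(const std::vector<AnimOption>& options,
                                  float maxResponseTime) {
    const AnimOption* best = nullptr;
    for (const auto& o : options) {
        if (o.responseTime > maxResponseTime) continue;  // too sluggish
        if (!best || o.error < best->error) best = &o;
    }
    return best;  // null if nothing fits the budget
}
```

Moving the slider simply changes `maxResponseTime`, trading motion fidelity for responsiveness without rewriting either system.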
Photo 5: A canal in Amsterdam.
That’s Almost It…
Most of the research topics I just mentioned aren’t too labor intensive. They’ll require a few prototypes, lots of thought and maybe a little time writing up a paper — but not much more coding!
So, while that’s happening, I hope to cater a little more to the beginners among you. If you’re only just getting started, I’m impressed you read this far! Here are things you can look forward to:
More content and an interactive sandbox to help you learn game AI in practice.
A set of tools to help you create behaviors without having to do too much scripting.
Anyway, I’ll fill you in on the details once there’s something a little more concrete to discuss! It should be a great year — even if I can only do half of all this stuff…
Comments and questions welcome of course! Post below or in the forums.