Back when I was young, when games were played with paper and pencil, a “die roll” actually involved rolling a die, a cut scene was something spoken by a DM, and in-game graphics fidelity was limited only by your imagination, the original AD&D books’ pleasantly concise concept of the “Magic User” (with caps) was “a dude who could use magic.” Since those days the concept of a “magic user” (no caps) has been fractured into the specialties of abjuration, conjuration, divination, enchantment, evocation, illusion, necromancy, and transmutation. You can’t just be a “Magic User” anymore — at least not without severely limiting yourself.
In recent years, the field of game AI has become increasingly sub-specialized. As I participated in formal and casual group conversations at the recent GDC, I found myself quietly noting how often two game AI programmers could have very different areas of expertise, sometimes to the point of not even knowing each other’s terminology. And that was becoming obvious in the gatherings at GDC. If the conversations were any indication, it seems that you can’t just be an “AI Programmer” anymore — at least not without severely limiting yourself.
Differences and Similarities
While there were common threads amongst these people, there were also plenty of awkward exchanges between them. For example, a specialist in animation AI (necromancy?), someone using search algorithms for planning (divination?), and a behavioral AI designer (enchantment?) might find large areas left uncovered in the Venn diagram of their knowledge bases and associated job responsibilities. A comment heard often enough in these conversations was “Oh, I haven’t gotten into that kind of AI.”
That is not to say we weren’t curious about each other’s specialties. We were also somewhat evangelistic about our respective areas of interest or expertise. When speaking about my own work, I often found myself asking some form of “But why wouldn’t you want to add this to your AI?” Thankfully, for the most part, this question was well received by the other AI professionals I spoke with. Of course, I found myself asking the same question about what they spoke of. “Why wouldn’t I want to add [their nifty technique] into my own AI?” After all, that is what GDC is all about… not just learning about those nifty techniques but pondering the possibilities of how they can be used in our own projects.
The AI Elephant in the Room
Unfortunately, another comment that occasionally bubbled to the surface to burst with an uncomfortably sulfuric air was something to the effect of “I just don’t see why [that concept] would be a big deal.” I paraphrase, of course, for the sake of generality and simplicity. Suffice it to say, however, that comments such as these evoked reactions ranging from uncomfortable shuffles to incredulous looks. Manners aside, on rare occasions I agreed with the speaker. More often than not, though, I could not help but be a little baffled as to why this person would think that a particular idea was not useful. This was especially evident in cases where the aversion seemed born of either fear or misunderstanding of the technique itself. I wondered if this mentality was more pervasive “out there” in the industry. Perhaps this narrow-mindedness, regardless of its reasons, is what is holding the sub-industry of game AI back from achieving our ostensible collective goal of more realistic behavior. (Although the premise that we are trying to achieve “realistic behavior” is, itself, certainly subject to debate.)
While there were plenty of little things that I shook my head in consternation over, there was one theme that cut me deep. Why aren’t people more interested in using “simulation” techniques in the AI of individual characters? It seems to me that the concepts that make up — or at least underlie — simulation would be the spells that we could all cast. Everything we as AI programmers do should be based on the idea of simulating something.
The conversation at the 2008 AI Programmers Dinner was more than a meeting of the minds. Often, it was a meeting of differing disciplines entirely.
For example, if you are tasked with modeling an NPC, certainly you would have to take into account the pathfinding, steering, perception, interaction with objects, reaction, and the various animations that make all of those visible to the player. Admittedly, all of the above can get exceedingly complicated these days. In the most recent (4th) book in the AI Game Programming Wisdom series alone there are 12 separate entries just on improving pathfinding — including one by Chris Jurney dedicated entirely to turning vehicles. These advances that cut through complex issues are all well and good, but what about simulating the decisions that either result from those aspects (like perception and reaction) or drive them (like the interaction and animation)? We spend a lot of time modeling what the NPC can do and how he/she/it does it. But what about the why? Put another way, we model what their bodies do, but what about what their brains are doing?
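To make that distinction concrete, here is a minimal sketch of what a “why” layer might look like. The action names, scoring functions, and numbers are my own invention for illustration, not from any particular engine: a handful of utility functions score the candidate actions the body already knows how to perform, and the brain simply picks the highest-scoring one.

```python
# Hypothetical illustration: the "body" layer knows HOW to attack, flee,
# or idle; this "brain" layer scores WHY one of those actions is
# preferable to the others right now.

def score_attack(npc):
    # Attacking is more appealing when healthy and close to the target.
    return npc["health"] * (1.0 - min(npc["distance"], 10.0) / 10.0)

def score_flee(npc):
    # Fleeing grows more appealing as health drops.
    return 1.0 - npc["health"]

def score_idle(npc):
    # A small constant floor so the NPC does *something* when nothing wins.
    return 0.05

SCORERS = {"attack": score_attack, "flee": score_flee, "idle": score_idle}

def decide(npc):
    # The "why": pick whichever action currently has the highest utility.
    return max(SCORERS, key=lambda action: SCORERS[action](npc))

print(decide({"health": 0.9, "distance": 2.0}))  # healthy and close -> attack
print(decide({"health": 0.2, "distance": 8.0}))  # wounded and far  -> flee
```

The appeal of this split is that the pathfinding, steering, and animation systems stay exactly as they are; the simulation lives entirely in the scoring functions, which can be made as shallow or as deep as the design demands.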
There are strides being made with this at various levels of complexity. It started with random “die rolls” to determine state transitions and most recently has advanced with a sojourn into various forms of planning algorithms. Still, it is amazing to talk to AI programmers who say “Well, I haven’t gotten into that sort of thing; I don’t do simulations.”
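As a toy illustration of that progression (the states and the weighting scheme here are hypothetical, chosen only to show the idea), compare a pure “die roll” transition with one whose odds are weighted by the character’s situation, which is arguably the smallest possible step toward simulation:

```python
import random

STATES = ["patrol", "investigate", "attack"]

def die_roll_transition(rng):
    # The classic approach: every next state is equally likely,
    # regardless of what is actually happening to the NPC.
    return rng.choice(STATES)

def weighted_transition(rng, alertness):
    # A first step toward simulation: the same random roll, but the odds
    # now depend on a 0..1 "alertness" value describing the situation.
    # A calm guard mostly patrols; an alarmed one mostly attacks.
    weights = [1.0 - alertness, 0.5, alertness]
    return rng.choices(STATES, weights=weights, k=1)[0]

rng = random.Random(42)  # seeded so runs are repeatable
print(die_roll_transition(rng))
print(weighted_transition(rng, alertness=0.9))  # mostly attack/investigate
```

From here it is a short walk to full utility systems or planners: each refinement simply replaces more of the raw randomness with terms that model the world.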
This statement represents a great curiosity to me. Are these sorts of advanced, often multi-layered, calculations and algorithms relegated only to the Starcrafts and Civilizations of the game world? If there is a resistance to using complex mathematical bases to determine the behavior of individual NPCs, whence does that come? It used to be a lack of processing power, but Dr. Moore is coming to the rescue on that front in grand exponential fashion. (Although Ray Kurzweil may argue that it is logarithmic instead.) Another common answer is that “our NPCs don’t live long enough to exhibit those behaviors.” But is that an excuse for not developing deeper behaviors, or rather a result of the fact that we haven’t? Perhaps the very fact that our NPCs are dying so quickly is an effect of the limited behavioral depth we are putting into them? To be argumentative, one could say that the NPCs don’t live long enough to show off all those pretty animations either.
Is Simulation the Common Thread?
So where are we? As much as the knowledge-hungry part of me would like to argue otherwise, I have to concede that we can’t do everything in AI anymore. The field has gotten too big and too specialized. While we may all be “magic users”, we have our own individual disciplines and even our own languages to some extent. But are those programmers who choose to venture off into the more esoteric pursuits sometimes doing it at the expense of what AI truly needs as its bedrock? Should even the most basic decision simulation algorithms be among the cantrips that we all draw from, regardless of our AI specialty? Do we owe it to our craft to remember that we are breathing life into our creations — not just so that they are animated zombies without a soul… but thinking entities?