Open Coverage

AI Summit Slides, Notes, Highlights and Photos

Alex J. Champandard on April 9, 2009

The first two days of GDC 2009 featured the inaugural AI Summit, packed with presentations and panels from leading game developers worldwide. The event was a great success, drawing an audience that was roughly one third AI programmers, with the rest consisting of students, enthusiasts, other types of developers, volunteers, media and other speakers.

For many AI developers that attended, these back-to-back sessions completely reset their value system; the rest of GDC seemed much less intense in comparison! More generally, based on feedback so far, the AI Summit felt like a standalone mini-conference before the actual GDC, and it easily stood its ground compared to the last three days.

Of course, we were there to cover and participate in the event! I (Alex Champandard) was involved in three sessions of the Summit, and my wife Petra was in charge of the media coverage. Below you’ll find the best photos, the official slides & descriptions, as well as my session highlights and notes.

Table of Contents

  1. #define GAME_AI

  2. What Should I know As a Pro Game AI Developer?

  3. Integrating AI and Animation

  4. 2008 AI Postmortems

  5. AI and Designers: Mind the Gap

  6. Toward Solving Pathfinding

  7. Next Steps Towards Human AI

  8. Modeling Individual Personality and Emotion in Characters

  9. Tools and Techniques for Testing and Debugging AI

  10. AI Architecture and Design Patterns

  11. The Photoshop of AI

  12. An Introduction to Knowledge Representation

  13. Parallelism and Multi-threading in AI

#define GAME_AI

People: Steve Rabin


This introductory presentation sets the stage for the AI Summit by discussing what we mean by game AI, along with some highlights of game AI over the years. We’ll look at where it’s all headed, what challenges we should be working on, and how the presenters of the AI Summit will help inspire and challenge us all to take game AI to the next level.

Photo: Steve Rabin


Steve’s talk was split into three parts: who he is / what he’s done, an analysis of the common definitions of game AI, and a historical review of games and their AI innovations.

In part 2, Steve started by talking about the different pieces of code that build up game AI (if statements, random numbers, scripts) and wondering whether they can be considered AI. It’s an interesting question, although Steve didn’t give a very balanced impression of scripts when he described them. I’d amend that by saying that scripts & fixed sequences are irreplaceable in games and AI generally — just like any other logical building block.

Steve then provided two amended versions of the Wikipedia definition of Game AI, the first relates to characters:

“Game AI is any technique that contributes to the perceived intelligence of an entity, regardless of what’s under the hood.”

The second definition covers the use of AI during development to help create content:

“Techniques that generate artifacts that would normally be created by the game developers (like environment/layout, story, music, or models/animations/textures).”

Steve notes that AI touches on and overlaps with everything, a trend that we emphasized in the debugging session on Tuesday. He also listed the requirements of AI:

  1. AI must be intelligent, yet purposefully flawed.

  2. AI must have no unintended weaknesses.

  3. AI must perform within constraints.

  4. AI must be configurable by game designers or players.

  5. AI must not keep the game from shipping.

Steve listed some quick insights from general experience: AI implementation impacts game design, AI creates opportunities for the player, the player must learn to manipulate the AI & not just defeat it, the relationship between designer / AI programmer is particularly important, and working closely is important to make the AI shine. In terms of technology, he says game AI requires many different representations (a la Minsky) and that every game requires different AI.

The next topic Steve addressed is middleware, though this is a bit controversial to me. Steve argues that AI middleware is not effective because of specialization: “Baseball AI is different than football.” In my opinion, there are a lot of common requirements and algorithms that can be reused and provided by middleware. Middleware is starting to become more effective these days because it can extract these common patterns.

In part 3, Steve listed games and their innovations in AI. In the middle, Steve pointed out: “Unfortunately, not many successes of neural networks.” This was also a bit controversial for me; there’s a very good reason they aren’t successful. (This is discussed later.) The history of games is:

  • Pacman: Implicit cooperation, no randomness.

  • Simcity: Cellular automata, influence maps.

  • Dogz, Tamagotchi: Virtual pets.

  • Madden 1998: “Liquid AI” is the stuff that ran down EA’s leg when they saw GameDay.

  • Thief: Sensory system.

  • Half-Life: integration of the AI into the story.

  • Sims: smart terrain and smart objects.

  • Black and White: Perceptrons, Decision Tree learning, Imitation Learning, Gesture Recognition.

  • EyeToy: Machine Vision.

  • Fable: Player Reputations.

  • Halo 2: Behavior Trees.

  • Nintendogs: Gesture & Speech Recognition.

  • F.E.A.R.: Planning system based on STRIPS, “Environments to showcase the AI.”

  • Forza: Success story for neural networks, Drivatars — many years of research in Cambridge.

  • Colin McRae 2.0: also neural networks (John Manslow, Jeff Hannan, 2001).

  • Facade: Interactive Story and Natural Language Parsing, absorbing characters by design.

  • GTA 4: Pedestrians, ambient AI.

  • Left 4 Dead: AI Director, mood and tension, enemy spawning.

Steve finished by saying that in student votes for the best AI, the most votes were gathered by Halo, Half-Life, and F.E.A.R. (This was a relatively small sample though.)

Preparing for the Future: As a Professional Game AI Developer, What Should I Know?

People: Michael Dawe, John Funge, Brett Laming, Steve Rabin, and Robert Zubek


What tools and techniques must a professional game AI developer know? On a multi-million dollar commercial game, what expertise will your colleagues depend on you to provide? In addition to understanding the requirements and expectations within the game industry, this panel provides guidance for the many game AI courses that are springing up at universities throughout the world. Panelists include both working game AI developers and game AI instructors from the University of California Santa Cruz and the DigiPen Institute of Technology.

Photo: Steve Rabin, Michael Dawe, Brett Laming, Robert Zubek, and John Funge

Photo: Michael Dawe

Photo: Robert Zubek


John Funge splits his course into Perceiving, Acting, Reacting, Remembering, Planning, and Learning, then schedules student Projects. He emphasizes that “Doing is the most important.” Steve Rabin structures his course as AI Architecture, Movement, Path finding, Agents and Animation, Tactics and Strategy, Learning and Adaptation, then also includes Projects.

Some interesting quotes: “Flocking isn’t used that much in games,” so it’s not necessarily taught. (This is my experience too, although it gets mentioned a lot. Even beyond that, steering behaviors aren’t always the best way to solve a problem.) Also, “A-Star as a basic thing; you must implement it,” even though the panelists admitted it’s actually more of an academic exercise.

Robert Zubek emphasized the benefits of robotics research and systems, for instance Rodney Brooks’ subsumption architecture and behavior-based robotics. Michael Dawe, a former student of Steve’s, emphasized the importance of projects too, and explained that students at DigiPen repeatedly form different teams of 4 to ship games every year.

Brett Laming has a strong academic background, and emphasized simple & elegant patterns that come together in a whole system. He also hit a very important point: basic math like algebra and geometry, 3D, graphics, and matrix operations are absolutely essential and not many students are comfortable with these concepts. He also discussed the need for practical skills such as testing and debugging, and learning from mistakes to realize that “Finite-state machines suck.”

Concerning machine learning (ML), I asked a question about this myself as I wasn’t entirely happy with their original emphasis on ML as just another technique. Many beginners naturally gravitate towards genetic algorithms (GA) and neural networks (NN) with the expectation of getting something cool & useful out of them. I don’t think these technologies are quite there yet, and students should be informed to prevent wasted time and disillusionment: ML is risky, it’s not used often, and it requires significant knowledge and expertise. Michael said you need to learn about ML to understand the pitfalls. John agreed you should warn about the expertise required, and Brett was against them for the same reasons I listed — though they might be acceptable for offline learning. Robert said ML is used for completely different problems that aren’t often related to game AI, namely function approximation.

In terms of a teaching framework, Steve uses a DirectX sample as the basis for his AI, which has many advantages, including being simple to set up and easy for students to pick up. Brett emphasized the importance of getting up off the ground (to solve non-trivial problems) but without necessarily going next-gen. John mentioned he was considering using Flash to help with development, although going against the industry-standard C++ wasn’t seen as a good thing for educators to take on.

Animating in a Complex World: Integrating AI and Animation

People: Alex J. Champandard and Christian Gyrling


As leading-edge games push the number of animations for a single agent into the thousands, significant challenges arise in allowing AI characters to express themselves and give real performances in a physically constrained world. With this level of complexity, AI animation selection evolves into a new beast where AI, biomechanics, collision detection, and art become intertwined, resulting in significant engineering and artistic challenges.

This lecture examined these challenges and suggested strategies and architectures that can guide games into this rapidly evolving space.

Photo: Alex J. Champandard

Photo: Christian Gyrling

Animating in a Complex World: Integrating AI and Animation
Christian Gyrling and Alex J. Champandard, AI Summit GDC 2009
Download PPT


I was part of this session, so didn’t take any notes. However, here’s my take-away from Christian’s part of the talk. Naughty Dog uses the following technology / approach:

  • The character logic is split from the AI, and there’s a clear interface between the two. The AI handles gameplay-related tasks only, and the character makes them happen.

  • A navigation mesh is not used directly for pathfinding; instead, it is rasterized to a local 2D grid on the PS3’s SPU and used as the basis for the pathfinding.

  • Scripting is built into the system; there’s a clear way to request specific actions rather than having special hacks for overriding control.
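Naughty Dog’s actual SPU implementation wasn’t shown, but the core idea of searching over a small local grid can be sketched in a few lines. This is a hypothetical illustration (a plain breadth-first search in Python), not their code:

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search over a local 2D grid.

    `grid` is a list of strings: '.' is walkable, '#' is blocked.
    `start` and `goal` are (row, col) tuples; returns the shortest
    list of cells from start to goal, or None if unreachable.
    """
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk the chain back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for step in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = step
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == '.' and step not in came_from):
                came_from[step] = cell
                frontier.append(step)
    return None
```

In a real game, the cells would be rasterized from the nav mesh around the character, and the search would typically be A* with a heuristic rather than plain BFS.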

What you can take away from this, and the lecture as a whole (including my part), is that:

  1. Having a separate layer to handle the animation logic helps keep things simple.

  2. You’ll need to think about the interface to make sure you can get information to make good decisions, then actually execute those requests, and monitor the progress.

  3. You can reach high-quality animation and at the same time remain responsive to gameplay requirements!

  4. Locomotion is a big part of the animation problem generally, but luckily it’s very well studied.
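As a concrete (and entirely hypothetical) illustration of point 2, the interface between the AI and the animation layer boils down to a request / execute / monitor cycle. The class and method names below are invented for the sketch:

```python
from enum import Enum

class Status(Enum):
    RUNNING = 0
    SUCCEEDED = 1
    FAILED = 2

class AnimationLayer:
    """Minimal request/monitor interface between AI and animation."""

    def __init__(self):
        self._current = None
        self._frames_left = 0

    def request(self, action, duration_frames):
        """The AI asks for an action; returns False if the layer is busy."""
        if self._current is not None:
            return False
        self._current = action
        self._frames_left = duration_frames
        return True

    def update(self):
        """Advance one frame and report the status of the active request."""
        if self._current is None:
            return Status.FAILED  # nothing to execute
        self._frames_left -= 1
        if self._frames_left <= 0:
            self._current = None
            return Status.SUCCEEDED
        return Status.RUNNING
```

The point of the separation is that the AI only ever deals in requests and statuses; the animation layer decides how the action actually plays out.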

2008 AI Postmortems

People: John Abercrombie, Eric Grundstrom, Neil Kirby, and Matt Tonks


Programmers from SPORE, GEARS OF WAR 2, and BIOSHOCK share their experiences working on these successful titles. What was it like working on these games, what innovations were unique to each title, how were risks managed, what went right, and what went wrong?

Photo: Matt Tonks, John Abercrombie, and Eric Grundstrom

Photo: Neil Kirby

AI Post-Mortem: Spore
Eric Grundstrom, AI Summit GDC 2009
Download PPT
AI Post-Mortem: Gears of War 2
Matt Tonks, AI Summit GDC 2009
Download PPT

Editor’s Note: This session was completely full and closed off, and when I’d finished practicing my pathfinding talk for later I couldn’t get in. Stay tuned (RSS) for a separate review once we piece together the information.

AI and Designers: Mind the Gap

People: Alex Hutchinson, Soren Johnson, Joshua Mosqueira, Adam Russell, and Tara Teich


Game design and AI development have always been close relatives. Indeed, defining a line that separates the two is almost impossible as one cannot exist without the other — a feature that the AI cannot handle, for example, is worthless, and the behavior of the AI itself is core to a game’s pacing, challenge, and feel. Thus, almost every decision an AI programmer makes is essentially a gameplay decision, yet AI developers are neither hired as nor trained to be designers. On the other hand, pure designers are often at the mercy of AI programmers to turn their broad strokes concerning AI behavior into reality and have few options if the outcome is wrong.

This panel explored ways to manage this gap between designers and AI programmers to help establish better practices for this important (and inevitable) collaboration.

Photo: Soren Johnson

Photo: Tara Teich

Photo: Alex Hutchinson, Joshua Mosqueira, and Adam Russell

The First 10 Minutes…

Soren started the discussion by asking how much we design for ourselves (as players), compared to how much we design for the AI. Alex replied by saying that good AI is built to be beaten. Tara, however, pointed out that AI must still look smart and not seem like it’s being defeated on purpose; this is a tough balance from a programmer’s perspective.

Josh continued by emphasizing the perception of intelligence to the player; in Company of Heroes, adding speech made a huge difference in the perception of intelligence. Adam talked about designing the pub AI in Fable from the wrong perspective, without the player being part of the simulation. Alex agreed with this: the view of the player should be used to design the behaviors.

Tara pointed out that simulation is indeed cool, but we tend to get drawn into it too much as engineers. Soren asked, “What is the AI for?” and explained how the design of AI features in Civilization was driven by their use in the overall design. Tara talked about how rand() is often good enough; don’t write too much code.

Adam discussed the self-indulgent aspect of building game AI: it’s often a game in itself! Alex thinks you must design games in a systematic way, with optional special cases (these are interesting and memorable); a game needs regularity or it will collapse under its own weight.

Editor’s Note: This panel was fascinating and packed with information; we’ll cover the remaining 50 minutes in the near future to keep this post short. If you’d like to be notified when this is published, feel free to register on the site (no cost). We’ll also run a (public) live online debriefing session this Saturday, April 11th, and talk about this.

Toward Solving Pathfinding

People: Alex J. Champandard, Kevin Dill, and Chris Jurney


Modern hardware has helped pathfinding take one big step forward, with additional processing power and memory, along with larger development teams. However, the additional complexity of next-gen environments has caused us to take one huge leap backwards; dynamic obstacles, changing environments, and large crowds are sending NPCs’ navigational capabilities back a few generations.

Indeed, Damian Isla, Lead AI on HALO 3, recently said “pathfinding will always suck.” This session brings together the best developers to address his statement head on and discuss the technical challenges of solving pathfinding in a large dynamic world.

Photo: Kevin Dill

Photo: Chris Jurney

Photo: Chris Jurney, Kevin Dill, and Alex J. Champandard

Toward Solving Pathfinding
Alex J. Champandard, Kevin Dill and Chris Jurney, AI Summit GDC 2009
Download PPT

Characters Welcome: Next Steps Towards Human AI

People: Phil Carlisle, Richard Evans, Daniel Kline, Dave Mark, Borut Pfeifer, and Robert Zubek


AI characters can be beautifully modeled and animated, but their behavior rarely matches their life-like appearance. How can we advance the current state of the art, to make our characters seem more believable? What kinds of human behaviors are still missing in our AI, how can we implement them, and what challenges stand in the way?

This session discussed practical approaches to pushing the boundaries of character AI, past successes and ideas for the future, with an experienced panel representing a wide range of perspectives and games.

Photo: Dan Kline

Photo: Phil Carlisle, Borut Pfeifer, Richard Evans, Dave Mark, (Dan Kline), and Robert Zubek

The First 10 Minutes…

Phil started off by emphasizing the importance of actually testing and checking whether players can see, perceive and notice emotions; his research goes into this. My question beyond that is: does it actually make the game fun too? Dave would like to see quality behaviors in between the scripted scenes.

Richard’s interest is in extending Maslow’s hierarchy of needs, in particular in a social direction. Beyond that, Borut wants to see the AI evolve over time as their relationship with the player changes. Dan emphasized the importance of giving feedback to the player, in particular how to express this feedback.

In the context of better transitions and better portrayal of behaviors, Rob asks, “Why aren’t we doing this?!” Dave’s explanation is that we (AI) are in the middle of a pipeline, limited by the game design and the animation department. Borut counters that we need to emphasize current solutions now; there are many elements we can play around with.

Phil brought up the concept of iconic behavior as a solution: keeping it simple and stylized, almost cartoony. Richard mentioned we can work with text until there are more representations available. Rob asks about the assumption of going for stylized / iconic behavior: how can we express subtle emotions? Dan brings up the complexity of the animation system in basketball games as an example; we’ve nailed it there, there’s a huge amount of depth! Phil points out that Disney et al. have been experts at expressing subtle emotions with cartoons for over half a century now.

Dave discussed Malcolm Gladwell’s Blink, and experiments with mannequins that showed humans notice differences in subtle details. Borut instead says you need to have a results-oriented understanding of what you do, not just imperceptible subtleties. Dan discusses how it’s important to define what makes the performance (for your game), but time limitations always kick in.

Editor’s Note: This panel was also fast-paced and stuffed with information; if you’d like to read our coverage of the remaining 50 minutes, don’t hesitate to sign up to the site (at no cost). We’ll also run a (public) live online debriefing session this Saturday, April 11th, and talk about this.

Breaking the Cookie-Cutter: Modeling Individual Personality, Mood, and Emotion in Characters

People: Phil Carlisle, Richard Evans, and Dave Mark


As game characters engage in deeper interactions with the player, subtlety of behavior becomes more important. However, in worlds that feature hundreds of characters, the homogeneous ‘cookie-cutter’ approach of modeling those characters becomes evident, leaving the world feeling repetitive and shallow. Everyone acts the same.

Using examples from games such as The Sims 3, the presenters showed how characters can be algorithmically endowed with distinct personality differences so that every one acts as an individual. Also explored was how personality, mood, emotion and other environmental factors enable individual characters to select from a wide array of context-appropriate choices and actions. The session concluded by showing how these behaviors can be expressed through animation selection so as to be more engaging and immersive for the player.

Photo: Phil Carlisle

Photo: Richard Evans

Photo: Dave Mark

Breaking the Cookie-Cutter: Modeling Individual Personality, Mood, and Emotion in Characters
Phil Carlisle, Richard Evans and Dave Mark, AI Summit 2009
Download PPT


Both Phil’s and Dave’s part of this session were summarized in our video report of the AI Summit. What follows is a detailed outline of Richard Evans’ part which focuses on The Sims 3.

For The Sims 3, the developers wanted personalities to be immediately obvious without you having to check statistics. So, traits affect probabilities, traits affect affordances (the use of objects), and traits provide adverbial modifiers: mutter while walking, or trip often. Richard showed a video of a woman walking in the park whose behavior was obviously flirty, both in the motives that drove her and in how she carried herself.
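The actual scoring system wasn’t detailed in the talk, but “traits affect probabilities” can be illustrated with a simple weighted random choice. The trait names, actions, and multipliers below are invented purely for the sketch:

```python
import random

# Hypothetical trait multipliers: each trait scales an action's base score.
TRAIT_WEIGHTS = {
    "flirty":   {"chat_up": 3.0, "read_book": 0.5},
    "bookworm": {"chat_up": 0.5, "read_book": 3.0},
}

def pick_action(traits, actions, rng=random):
    """Weighted random choice where traits multiply each action's base score."""
    scores = {}
    for name, base in actions.items():
        score = base
        for trait in traits:
            score *= TRAIT_WEIGHTS.get(trait, {}).get(name, 1.0)
        scores[name] = score
    # Roulette-wheel selection over the trait-adjusted scores.
    roll = rng.uniform(0, sum(scores.values()))
    for name, score in scores.items():
        roll -= score
        if roll <= 0:
            return name
    return name  # floating-point fallback: return the last action
```

Because the traits only reshape probabilities, a flirty Sim still occasionally reads a book; the personality shows through in the long-run distribution of choices.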

Data-driven techniques are key to building such modular traits. There are many different motives, and they actually change over time too. In comparison, The Sims 1 & 2 both had fixed motives. Richard shows a video of a picnic, where an obnoxious Sim keeps trying to ruin a family’s lunch in the park.

Richard also talked about how multiple goals combine in the context of social situations; while you’re at a picnic certain things are expected of you, and likewise when you’re given something you might be expected to pass it along. These goals and behaviors are emphasized by the situation itself. Richard also talks about fine-grained actions, in particular how they help build and portray these traits.

Finally, Richard discusses the production rules (an expert system) used for social interactions. He uses the example of reacting to a joke: if you have a sense of humour it’s funny, with repetition it’s boring, but by default it’s neutral. Sims in The Sims 3 actually learn about other Sims through situations, by observing these rules fire! So the whole game is an exploration of traits that you can induce from the different interactions. In this way, the traits form the story and affect its tension.
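Richard didn’t show the actual rule syntax, but the joke example maps naturally onto prioritized condition/reaction pairs. This sketch (with invented dictionary keys and trait names) shows the “most specific rule wins, default is neutral” pattern:

```python
def react_to_joke(listener):
    """Pick a reaction via prioritized production rules; default is neutral.

    `listener` is a dict with hypothetical keys: 'traits' (a collection of
    trait names) and 'heard_before' (how many times this joke was told).
    """
    rules = [
        # (priority, condition, reaction) -- highest matching priority fires.
        (2, lambda s: s.get("heard_before", 0) >= 2, "bored"),
        (1, lambda s: "sense_of_humour" in s.get("traits", ()), "laugh"),
        (0, lambda s: True, "neutral"),  # default rule always matches
    ]
    for _, condition, reaction in sorted(rules, key=lambda r: r[0], reverse=True):
        if condition(listener):
            return reaction
```

An observing Sim could then learn about the listener by noting which rule fired: laughing at a joke reveals a sense of humour, just as described in the talk.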

When Good AI Goes Bad: Tools, Techniques, and Strategies for Testing and Debugging AI

People: John Abercrombie, Phil Carlisle, and Alex J. Champandard


Designing and building AI logic is less than half the battle. Over the course of a project, developers will spend most of their time testing the AI, ensuring it works, fixing what is broken, and endlessly tweaking it to meet the game requirements and the designer’s vision. While the programming aspect of game AI is well documented, testing and debugging is still very much a dark art.

This panel explored strategies and architectures that AI programmers can employ as well as strategies that QA departments can leverage to ensure validity and compliance.

Photo: John Abercrombie

Photo: Phil Carlisle

Photo: Alex J. Champandard


I was moderating this panel-style tutorial session, so at the moment I only have the handwritten notes I used for preparation. We also don’t have the rights to distribute the unmodified slides, and this session was not recorded for legal reasons. However, we’ll be providing some kind of coverage of this session as part of the build-up to the relaunch of our redesigned site & Premium area. If you’re interested, feel free to join this newsletter and we’ll notify you soon.

From the Ground Up: AI Architecture and Design Patterns

People: Brett Laming


AI is rapidly evolving, getting more intricate and becoming increasingly multi-disciplinary. With greater expectation comes the need for practical proven AI architectures, clean robust engineering, and fast dependable production. Reuse and reliability are key, but in a genre/platform-specific world, where do we draw the line? Where are these magic ‘textured triangles’ of AI?

This lecture, calling on observation and evolved industry experience, discussed a potential multi-title, multi-genre architecture that now adds GTA Chinatown Wars to its history. Working from the ground up, it highlighted and justified the common patterns that emerge across genres, platforms, and even different trends in AI architecture. Tying in key elements of the Summit, it aimed to provide an insight into the thought processes behind AI architectural design, regardless of title or genre.

Photo: Brett Laming

From the Ground Up: AI Architecture and Design Patterns
Brett Laming, AI Summit GDC 2009
Download PPT


Brett’s talk covers the MARPO architecture that he wrote about in depth in AI Game Programming Wisdom 4. Brett and I have a long-standing gentleman’s disagreement about this approach, which goes all the way back to my review of his article for that book, and which we discussed at length during the party on Tuesday… I will do my best to present his architecture objectively — since we resolved our differences over a beer!

  • Brett’s architecture relies on a separation of the simulation and its control via an interface he calls the Yoke, which is like an extensible virtual controller.

  • Different parts of the logic that controls an actor can be spread across different skills or smart objects. This means there’s a clear way to manage the controllers when an AI is inside a car.

  • On top of this, Brett uses a “top-heavy” behavior tree structure. This reduces modularity by putting more responsibility into the parent behavior (picking the child), but this allows him to reduce branch expansion — a useful optimization for the DS platform.

  • The result of the execution of a behavior tree (like in MARPO) is a stack of tasks / behaviors. Brett has three of these stacks for multiple priorities: long term, medium term, interrupt.
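MARPO’s real task model is much richer than this, but the three-stack priority scheme from the last bullet can be sketched as follows (the class and method names are mine, not Brett’s):

```python
class TaskStacks:
    """Three task stacks at different priorities; highest non-empty wins."""

    PRIORITIES = ("interrupt", "medium_term", "long_term")

    def __init__(self):
        self.stacks = {p: [] for p in self.PRIORITIES}

    def push(self, priority, task):
        """Push a task onto one of the three priority stacks."""
        self.stacks[priority].append(task)

    def current(self):
        """The task to run: top of the highest-priority non-empty stack."""
        for p in self.PRIORITIES:
            if self.stacks[p]:
                return self.stacks[p][-1]
        return None

    def finish_current(self):
        """Pop the active task; control falls back to lower priorities."""
        for p in self.PRIORITIES:
            if self.stacks[p]:
                return self.stacks[p].pop()
        return None
```

The appeal of the scheme is that an interrupt (say, dodging a grenade) never destroys the medium- or long-term plans; when it finishes, the previous task resumes from the top of its own stack.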

As I mentioned at the end of a controversial question to Brett, Façade has some interesting patterns concerning parallel tree execution. This seems similar, except MARPO does this more efficiently.

My only disagreement with this (otherwise very sensible) architecture is this: while the “virtual controller” interface separating AI and character has many production benefits, it can cause more work when building the AI — especially the controllers — and often results in lower quality animation due to the reactive nature of the interface. Also, this virtual controller is a leaky abstraction that can’t always cleanly separate AI and player, and many hacks result from such designs in practice.

The Photoshop of AI:
Debating the Structure vs Style Decomposition of Game AI

People: Chris Hecker, Damián Isla, Borut Pfeifer, Steve Rabin, and Stuart Reynolds


At GDC 2008, Chris Hecker put forth a fascinating, mind-bending theory about hard interactive problems. He claimed that solutions to such problems usually have a deep structure vs style decomposition. Graphics has many of these decompositions (one example is the texture mapped triangle) where the computer can reason efficiently about the structure and artists can apply the style using tools, which has allowed computer graphics to blossom and significantly impact games. Similarly, AI is a hard interactive problem, and the theory predicts that powerful solutions to the AI problem will have a structure vs style decomposition, which would allow AI to make an equally large impact.

Can AI be decomposed in such a way that someday we might have a Photoshop of AI? This panel explored the possible structure vs style decompositions for AI and discussed what a Photoshop of AI might look like. Prepare yourself for an epic philosophical rant!

Photo: Borut Pfeifer

Photo: Chris Hecker

Photo: Steve Rabin, Damián Isla, and Stuart Reynolds


I discussed this session with Dan Kline at the GDC Speaker’s party, and he mentioned that he felt the AI community should have reacted to Chris Hecker’s comments last year. Personally, I’ve been very happy using very low-level building blocks for my behavior trees (documented here, free registration required), and Dave wrote about this in a blog post here. However, beyond improving my usage of behavior trees, I didn’t feel this issue that Chris brought up was holding anyone back… Yet, given the panelists, I was curious about their thoughts on the subject!

Chris started off by explaining the idea of structure and style, re-stating the problem as he spoke about it last year, and built up to the question: “What is the Photoshop of AI?” This set the scene for Steve and Damián’s answer; both independently came to the conclusion that “Improvisational acting is the Photoshop of AI.” You start with a virtual character, and tell it to do something. As it acts the scene out, you correct its actions, explain things to it, and try again until you reach the desired results.

Of course the challenge of this approach is the interface, in particular the natural language interface — otherwise you just end up with a “bunch of sliders” to adjust. The concept of having many sliders was a recurring theme in the discussion. The sliders can represent the style, and the underlying logic could be the structure, but this distinction is not as clear-cut, since you can use the sliders as booleans with a threshold to control the “structure” of the code, etc.

The rest of the debate stumbled on the topic of machine learning. Stuart’s first premise, based on his middleware company’s product, is that it’s possible to solve the problem of extracting style from a dataset. The rest of the panel was suspicious, because the hard part is putting the structure for the learning in place: which features to use. Also, behavior capture with machine learning isn’t that simple because you need to be able to “blend” — and again this requires structure to do well. Stuart then offered that machine learning could help with this part too, but that didn’t go down well and resulted in a few misunderstandings.

A few extra points made during the panel:

  • Borut said that balancing style vs. structure is part of the day job; you’re expected to do it to build your own AI. The question is whether we can find common patterns or techniques to do this.

  • Damián expressed that the job is to look for orthogonalities (i.e. independent concepts); there’s always an interplay of style and structure (the difference is hard to pin down), so it will happen at many levels as you break down a problem.

  • Chris mentioned that top-down standardization is a terrible idea, if it’s even possible at all. Even the outcome is questionable for him: the fossilization of a sub-industry. He says no to the AIISC.

  • Rob Zubek, asking a question, helped ground the discussion in the real world by using pixel arrays in Photoshop as an example: they can be built and transformed. What are the equivalents for AI?

Overall, this panel opened up more questions and left many loose ends untied. Next year I’d love to see a bottom-up approach to this, focusing on how to build reusable, composable behaviors that can be manipulated — maybe as a set of microtalks!

Beyond Behavior: An Introduction to Knowledge Representation

People: Peter Gorniak and Damián Isla


As a field, we have spent a lot of time worrying about how our characters behave, and have come up with various techniques, including HFSMs, behavior trees, and GOAP planners, for dealing with the scale challenges that complex behavior presents. However, we have spent considerably less time worrying about what our characters KNOW. After all, if our AI is to make decisions about what to do, we must figure out what information the AI bases its decisions ON. The format, organization, and updating of that information is the problem of Knowledge Representation. Moreover, if architected appropriately, it can result in a great reduction in behavioral complexity, thus making behavior authoring easier and your game more fun. Knowledge representation is one of the great unrecognized problems of game AI.

This lecture presents a number of systems for KR drawn from both academic and industrial sources. Damián Isla provided an in-depth look at the architecture of the AI for Bungie's Halo, and the role that KR plays in that architecture. He also presented several academic techniques for KR, including Occupancy Maps and Semantic Nets. Finally, Peter Gorniak took a look at a tantalizing topic for the future of AI and games: the interaction of KR and natural language. Attendees came away with an appreciation for the role that KR can play in creating compelling AI and gameplay, and with some concrete places to start their own exploration of the topic.

Photo: Damián Isla

Photo: Peter Gorniak

Knowledge in Action
Peter Gorniak, AI Summit GDC 2009
Download PPT


Damian Isla kicked off this highly anticipated lecture with some statements.

  • The purpose of KR is to separate the thing itself from its reasoning.

  • KR makes it fun for the player to reason about the AI and to trick it.

  • Players are good at building “Theory of Mind” about others.

  • KR has different timescales: now, forever, in between.

Information can have confidence (How sure am I?), salience (How important is this to me?) and predictability, which makes it easier to reason about.

Damian then showed a demo, built in a few days with his 2D prototyping framework, that was heavily inspired by his work at MIT on Duncan the Highland Terrier. He showed how simple behaviors in a 2D stealth game could be enhanced significantly using a better representation, in this case an occupancy grid. (Note that you'll score points with AI geeks if you remember this name; it's not an influence map. Occupancy grid.) Damian says this is like Doom-level AI with good KR. He explained that confusion (which made a big difference in his demo) is surprise at the KR differing from the world itself.
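To make the idea concrete, here is a minimal sketch of an occupancy grid in the spirit of Damian's demo — this is not his code, just the standard technique: each cell holds the agent's belief that the target is there, visible cells are cleared by observation, and belief diffuses to neighbouring cells over time because the target may have moved.

```python
class OccupancyGrid:
    """Per-agent belief about where the target might be (one float per cell)."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        p = 1.0 / (width * height)  # uniform prior: target could be anywhere
        self.cells = [[p] * width for _ in range(height)]

    def observe_empty(self, x, y):
        """The agent can see cell (x, y) and the target is not there."""
        self.cells[y][x] = 0.0
        self._normalize()

    def observe_target(self, x, y):
        """The agent sees the target: all probability collapses to one cell."""
        self.cells = [[0.0] * self.width for _ in range(self.height)]
        self.cells[y][x] = 1.0

    def diffuse(self, rate=0.2):
        """Each tick the target may have moved, so belief leaks to neighbours."""
        new = [[0.0] * self.width for _ in range(self.height)]
        for y in range(self.height):
            for x in range(self.width):
                p = self.cells[y][x]
                near = [(x + dx, y + dy)
                        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= x + dx < self.width and 0 <= y + dy < self.height]
                new[y][x] += p * (1.0 - rate)
                for nx, ny in near:
                    new[ny][nx] += p * rate / len(near)
        self.cells = new

    def _normalize(self):
        total = sum(sum(row) for row in self.cells)
        if total > 0.0:
            self.cells = [[p / total for p in row] for row in self.cells]

    def most_likely(self):
        """The best cell to search next."""
        return max(((x, y) for y in range(self.height) for x in range(self.width)),
                   key=lambda c: self.cells[c[1]][c[0]])
```

The key point from the talk survives even in this toy: the AI searches where it *believes* the target is, not where the target actually is, so the player can deliberately mislead it.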

Damian then discussed the popular concept of target lists, which maintain a confidence level as a float rather than an enumeration. Such a list allows the AI to make mistakes. He suggests keeping derived data (produced by reasoning) separate from the perceived data. He showed an example of how you can use this to trick the AI in a stealth-like game; this makes the player feel smart.
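A plausible minimal version of such a target list — again an illustrative sketch, not code from the talk — stores the perceived data (last seen position and time) and derives a float confidence from it, so the AI's belief can lag behind the world:

```python
class TargetRecord:
    """One entry in an AI's target list: where it *believes* a target is."""

    def __init__(self, position, now):
        self.last_seen_position = position  # perceived data
        self.last_seen_time = now
        self.confidence = 1.0               # 1.0 = just seen, 0.0 = forgotten

    def refresh(self, position, now):
        """Target is visible again: overwrite the belief."""
        self.last_seen_position = position
        self.last_seen_time = now
        self.confidence = 1.0

    def decay(self, now, half_life=4.0):
        """Confidence fades over time, so the AI can be wrong (and tricked)."""
        self.confidence = 0.5 ** ((now - self.last_seen_time) / half_life)


class TargetList:
    def __init__(self):
        self.records = {}  # target id -> TargetRecord

    def perceive(self, target_id, position, now):
        if target_id in self.records:
            self.records[target_id].refresh(position, now)
        else:
            self.records[target_id] = TargetRecord(position, now)

    def update(self, now, forget_below=0.05):
        for tid in list(self.records):
            rec = self.records[tid]
            rec.decay(now)
            if rec.confidence < forget_below:
                del self.records[tid]  # the AI has genuinely forgotten

    def best_guess(self, target_id):
        """Derived data: where to search, even if the target has since moved."""
        rec = self.records.get(target_id)
        return (rec.last_seen_position, rec.confidence) if rec else None
```

The float confidence is what makes the trickery possible: an AI that searches a stale `last_seen_position` with dwindling confidence looks fallible in exactly the way Damian described.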

Since Phil Carlisle asked about memory, Damian discussed this briefly:

  • Working Memory: kept with the behavior tree and discarded with it.

  • Short-term Memory: connected to time.

  • Episodic Memory: connected to salience.
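Those three timescales could be sketched as three stores with different eviction rules — a hypothetical structure I'm inferring from the bullet points above, not something shown in the session:

```python
class Memory:
    """Three stores matching the timescales above: behavior, time, salience."""

    def __init__(self):
        self.working = {}     # scratch data, lives only as long as the behavior
        self.short_term = []  # (expiry_time, fact) pairs, evicted by the clock
        self.episodic = []    # (salience, fact) pairs, kept if striking enough

    def on_behavior_finished(self):
        self.working.clear()  # discarded along with the behavior tree node

    def remember_short_term(self, fact, now, lifetime=10.0):
        self.short_term.append((now + lifetime, fact))

    def remember_episode(self, fact, salience, threshold=0.7):
        if salience >= threshold:  # only salient events survive long-term
            self.episodic.append((salience, fact))

    def tick(self, now):
        self.short_term = [(t, f) for t, f in self.short_term if t > now]
```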

Challenge #1: Representational Versatility. Damian mentioned that you need a polymorphic solution to KR to support everything you need. This reminds him of the percept DAG of the MIT Media Lab. Challenge #2: Performance. Representing lots of knowledge is expensive, so use shared KR, or combine shared and personalized KR. In retrospect, I should have asked about hierarchical KR, for example for squads sharing information. Damian also described his salience threshold approach to KR.
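The shared-plus-personalized combination can be pictured as a two-layer lookup — a speculative sketch of the idea, not Damian's implementation: expensive world-level facts live in one shared store, while each agent keeps a cheap private overlay that shadows it.

```python
class SharedKnowledge:
    """One copy of expensive, world-level facts for the whole squad."""
    def __init__(self):
        self.facts = {}


class AgentKnowledge:
    """Cheap per-agent layer; falls back to the shared store on a miss."""
    def __init__(self, shared):
        self.shared = shared
        self.personal = {}

    def learn(self, key, value):
        self.personal[key] = value  # only this agent knows it

    def lookup(self, key):
        if key in self.personal:
            return self.personal[key]   # private belief shadows shared facts
        return self.shared.facts.get(key)
```

The memory saving is the point: N agents share one copy of the bulky knowledge and pay only for their individual differences.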

Peter Gorniak took over, talking about the importance of separate representations, again as grandpa Minsky suggests. But you need to think about how they inform each other. Peter suggests trying to keep KR separate; it has benefits, though it's not always possible. You should go back and look at old KR research now: it makes sense for game development even though the robotics/AI people may have discarded it.

Predicate Logic Knowledge was the next big topic. Think of it as a declarative scripting language that's more of a database you can query with logical statements. It looks like this:

(dead "John")
(restraining "Terry" ?v)

Peter’s previous work focuses on using compilers to create hard-coded domains using C++ enums, pre-allocations and fixed-size arrays. This means you can get the benefits of such First-Order Logic without too much cost. He showed it working within the context of a first-person shooter game he’s working on at Rockstar Vancouver.
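To see what querying such a fact base means, here is a tiny interpreter for the notation above — a teaching sketch (an interpreted Python toy, where Peter's point was precisely that you can compile this down to enums and fixed-size arrays in C++): facts are tuples, and `?v`-style variables are bound by matching.

```python
def is_var(term):
    return isinstance(term, str) and term.startswith("?")

def match(pattern, fact, bindings):
    """Try to unify one pattern tuple against one fact tuple."""
    if len(pattern) != len(fact):
        return None
    bindings = dict(bindings)
    for p, f in zip(pattern, fact):
        if is_var(p):
            if p in bindings and bindings[p] != f:
                return None  # variable already bound to something else
            bindings[p] = f
        elif p != f:
            return None      # constant mismatch
    return bindings

def query(pattern, facts):
    """All variable bindings for which the pattern holds in the database."""
    return [b for fact in facts
            if (b := match(pattern, fact, {})) is not None]

# A fact base in the spirit of the snippet above.
facts = [
    ("dead", "John"),
    ("restraining", "Terry", "John"),
    ("restraining", "Terry", "Mary"),
]
```

Querying `("restraining", "Terry", "?v")` against these facts yields one binding per match, which is exactly the "database you can query with logical statements" view.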

In the next part of his talk, Peter also showed how you can use plan recognition to figure out what the player is trying to do. You can basically parse the current actions in the context of an existing top-down tree (like a behavior tree) that describes what the player can do. This means you can list the possible things a player may be doing, for example when he interacts with a door, which gives new meaning to the concept of affordance.

The third part of Peter’s talk covered his previous research on natural language processing and speech recognition. He showed the system recognising ambiguous speech, context-sensitively resolving a command to attack either a bear or a barrel.

Parallelism in AI: Multithreading Strategies and Opportunities for Multi-core Architectures

People: Julien Hamaide


With computing now relying on parallelism for efficiency, how does AI fit into this brave new world? How can AI algorithms leverage this expanding resource and what architectures enable AI to become even more parallel? An even more interesting question might be, What new opportunities does this processing power open up for AI?

This lecture looked at the practical elements of multithreading AI algorithms on today’s hardware with an eye toward performance and debugging, while looking toward some interesting opportunities in the future.

Photo: Julien Hamaide

Parallelism in AI: Multithreading Strategies and Opportunities for Multi-core Architectures
Julien Hamaide, AI Summit GDC 2009
Download PDF


Julien’s talk was very enlightening, tying together many aspects of multi-threading and framing them within the context of game AI. His slides are extremely detailed and provide a better overview for those interested in the specifics. However, the one big take-away for me was that you need to be most careful about your memory: alignment, layout, usage, compaction, caching, making self-contained structures, etc. This is the single biggest factor in speeding up multi-threaded code; otherwise it could just end up being slower than a single-threaded version!
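The structural half of that advice — self-contained data, no shared mutable state between workers — can be illustrated even in a high-level language. This is a schematic sketch (Python threads won't reproduce the cache behaviour Julien discussed; the point is the data organization): each worker gets its own contiguous slice of agents and returns a fresh output, so no locks are needed.

```python
from concurrent.futures import ThreadPoolExecutor

def update_batch(positions, velocities, dt):
    """Pure function over one batch: reads its inputs, returns fresh output.

    No shared mutable state between batches, so they cannot race."""
    return [(x + vx * dt, y + vy * dt)
            for (x, y), (vx, vy) in zip(positions, velocities)]

def update_agents_parallel(positions, velocities, dt, workers=4):
    # Split agents into contiguous slices, one self-contained batch per worker.
    n = len(positions)
    chunk = max(1, (n + workers - 1) // workers)
    slices = [(positions[i:i + chunk], velocities[i:i + chunk])
              for i in range(0, n, chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda s: update_batch(s[0], s[1], dt), slices))
    # Reassemble the batches in order.
    out = []
    for batch in results:
        out.extend(batch)
    return out
```

In a real engine the same pattern would use native threads over tightly packed, cache-aligned arrays; the discipline (partition the data, keep each worker's working set self-contained) is what transfers.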


Photo: AI Summit attendees queuing up for questions

The Summit went very well, and the AI Programmers’ Guild will organize the event again next year. It’s also very encouraging that it worked so well as a “standalone” event; there were no schedule clashes, everyone was in the same location for the whole two days, and the mood was very friendly.

For me, this is very encouraging for our Paris Conference on Game AI on June 10th and 11th. We’re bringing together the best developers from Europe at the CNAM, including people from Guerrilla, Crytek, Black Rock Studio, People Can Fly, Recoil, 2K, CGF-AI and many middleware developers based in France and Germany. Scheduled talks will feature the AI Bots in Killzone 2, the Racing AI for Pure, CryEngine’s Behavior Trees, and HTN Planning in Strategy Games. The event is completely free, and you’ll be hearing more about it on the blog very shortly. In the meantime, if you’d like to join us, you can head over to this page; sign-up first (if you haven’t already) then register for the conference.
