One of the (few) perks of being a research student is attending conferences. There are many reasons to attend a conference: you get to travel to places you probably wouldn't otherwise visit, toss around ideas with other people who work in the same research area, and get a glimpse of state-of-the-art approaches to problems you might never have considered.
Last week I was fortunate enough to attend the 2010 Conference on Artificial Intelligence and Interactive Digital Entertainment (otherwise known as AIIDE). A relative newcomer to the scene (currently in its 6th iteration), AIIDE has developed a bit of a reputation as being the venue at which the games industry and AI academia meet. From a student's point of view AIIDE is attractive for two reasons:
There is a strong program committee and rigorous peer-review process (so publishing here looks good on your CV).
It attracts interesting and influential keynote speakers (such as industry icon Peter Molyneux in 2007 and pathfinding guru Paul Tozour in 2009).
This year's event, held at Stanford University from the 11th to the 13th of October, promised to be no different. On the cards were several interesting keynote speakers, a StarCraft AI competition and an entire day of pathfinding goodness (including my own humble contribution to speeding up search). Was I excited? You bet!
Day 1 Highlights
The conference kicked off in style with a wonderful keynote from industry veteran Chris Jurney (slides). Known for his work on games such as Company of Heroes and Dawn of War 2, Chris talked about some of the challenges he's faced implementing efficient AI systems in real-time strategy games. Perhaps the most surprising revelation (for me) was the critical role that pathfinding plays in Chris' work. For example:
The pathfinding system is invoked to answer distance queries that facilitate other AI features such as cover point identification.
Pathfinding is used as a mechanism for maintaining squad formations.
During combat, units constantly re-evaluate their paths to avoid overlapping with each other and to make sure they don't overshoot their targets.
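The first of these uses is worth unpacking: a "distance query" answered by the pathfinder gives the true walking distance rather than the straight-line distance, which matters whenever walls are in the way. A minimal sketch of the idea (my own illustration, not code from the talk) is a breadth-first flood fill over a grid:

```python
from collections import deque

def grid_distances(grid, start):
    """Breadth-first flood fill: true walking distance from `start`
    to every reachable tile. Walls are marked '#'."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != '#' and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

grid = ["....",
        ".##.",
        "...."]
d = grid_distances(grid, (0, 0))
# The straight-line distance from (0,0) to (1,3) is short, but the
# walking distance must route around the wall.
print(d[(1, 3)])  # -> 4
```

A single fill like this answers distance queries for every tile at once, which is why features such as cover-point scoring can piggyback on the pathfinding system instead of running their own searches.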
Chris also touched on some of the difficulties he encountered when modeling different size units (a problem close to my heart) and finished by outlining an interesting rule-based melee system which was employed in Dawn of War 2. It was a complex dance of 8-10 weight-based heuristic features (such as retaliation, decision inertia and others) which all had to be balanced in order to achieve nicely choreographed battles of up to 100 units at a time. Impressive stuff!
Perceptually Realistic Behavior through Alibi Generation
People: Ben Sunshine-Hill, Norman Badler
One of the first talks of the day and also among the most interesting. This paper deals with the problem of generating intent (or alibis) for unscripted NPCs the player might be observing in large open-world games. The authors have developed a novel system capable of efficiently generating increasingly detailed plans for NPCs so it looks like they know what they're doing rather than stupidly wandering around with no purpose.
Learning Companion Behaviors Using Reinforcement Learning in Games
People: AmirAli Sharifi, Duane Szafron, Richard Zhao
Duane Szafron presented a reinforcement learning approach for developing the behaviour of companion NPCs in story-driven games such as Neverwinter Nights. The main motivation of the work appears to be that party members should react to the player in a manner consistent with how the player has treated them thus far. Pretty neat!
Using Machine Translation to Convert Between Difficulties in Rhythm Games
People: Kevin Gold, Alexandra Olivier
A very engaging talk from Kevin Gold, this paper outlines an automatic system for analysing note sequences in 1-dimensional rhythm games like Guitar Hero. The aim is to take in the sequence of notes for an expert-difficulty song and automatically generate sequences for the easy, medium and hard difficulties. It's an interesting problem because, according to Kevin, game designers at Harmonix have to manually encode the different sequences of notes which appear for every song in each of the 4 available difficulties. The authors analysed the song corpus of Guitar Hero 1 and 2 (the notes are apparently stored as MIDI files on the disc) and developed a novel Bayesian method for predicting which notes to drop when converting between difficulties. Perhaps the most remarkable aspect of the work is the accuracy of the reported method: in many cases the generated songs match the official versions over 80% of the time. Nice work guys!
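To give a flavour of the keep-or-drop framing (this is a deliberately toy model, not the paper's actual method): from aligned expert/easy chart pairs you can estimate, per beat position, how often an expert note survives into the easier chart, then apply those probabilities to a new song.

```python
from collections import defaultdict

def train_drop_model(expert, easy, period=4):
    """Toy note-drop model: for each beat position within a bar,
    estimate how often an expert-chart note survives into the easy
    chart. `expert` and `easy` are aligned 0/1 note sequences."""
    keep, total = defaultdict(int), defaultdict(int)
    for i, (e, z) in enumerate(zip(expert, easy)):
        if e:  # only count positions where the expert chart has a note
            total[i % period] += 1
            keep[i % period] += z
    return {b: keep[b] / total[b] for b in total}

def convert(expert, model, period=4, threshold=0.5):
    """Keep a note only where its estimated survival probability
    clears the threshold; drop it otherwise."""
    return [e if model.get(i % period, 0) >= threshold else 0
            for i, e in enumerate(expert)]

# Tiny training pair: the easy chart keeps downbeat notes only.
expert = [1, 1, 1, 1] * 4
easy   = [1, 0, 0, 0] * 4
model = train_drop_model(expert, easy)
print(convert([1, 1, 0, 1, 1, 0, 1, 1], model))
# -> [1, 0, 0, 0, 1, 0, 0, 0]
```

The paper's model conditions on much richer context than beat position alone, but the shape of the problem (learn which notes survive, then predict) is the same.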
Day 2 Highlights
A couple of interesting keynotes on the second day:
AI researcher Michael Young (from North Carolina State's Liquid Narrative group) discussed some of the challenges involved in automatically generating narratives. He touched on a range of issues including story generation (and the fact that current methods produce scripts which are sound but not necessarily believable), discourse generation strategies and the closely related problem of automated cinematography.
Gaming industry veteran Chris Hecker talked about his latest project, SpyParty. This is an interesting game in which one player (the Spy) must complete a set of goals by talking to and interacting with a group of NPCs in a social setting -- all while remaining undetected by another player (the Sniper) who is observing the party in order to identify and kill the Spy. It's a kind of reverse Turing Test where in order to win the Spy must successfully pretend to be a computer. Chris went on to talk about some of the challenges he faced implementing SpyParty, and in particular the difficulties he's had modeling human-scale interactions with objects (there is apparently not much work in this area). Listening to Chris describe his efforts, it seems that the primary novelty which SpyParty delivers is that it aims to entertain through suspense and the playing of psychological games between the players rather than through the usual action or fear mechanics which are so popular in the games industry. It's a brave and ambitious project in my opinion -- the likes of which are few and far between. Good luck with it Chris!
Another highlight of the day was a Man vs Machine match between one of the best-performing entrants in the StarCraft AI competition and former pro-gamer =DoGo= (a Brood War veteran from the 2001 WCG). The take-home message was clear: AI sucks. The computer opponent was easily crushed in a very one-sided match. Interestingly however, the same AI won more often than not when pitted against the default Blizzard bots; so it's probably more than a match for most casual players.
An Offline Planning Approach to Game Plotline Adaptation
People: Boyang Li, Mark Riedl
Boyang Li discussed an interesting partial-order planning approach for tailoring role-playing game plots to particular player preferences. The idea is to feed an AI planner a set of player preferences (preferred quest types etc), a corpus of quests (made up of tasks, actions, rewards etc) and have it generate a sound and coherent story. It's a nice idea, particularly as the described system appears to be able to repair existing plots to deal with changing preferences.
Adversarial Navigation Mesh Alteration
People: D. Hunter Hale, G. Michael Youngblood
This work studies competing groups of units operating in a Capture The Flag environment. Each group of units is able to effect various changes on the world; for example, by constructing obstacles to stop opposing units from reaching their flag or tearing down obstacles to reach an enemy flag (or both). An interesting aspect of the research is that the pathfinding routines of each group of agents are modified to take into account the cost of making changes to the world geometry -- so, for example, rather than just computing the cost of going around an obstacle units are also able to compute the cost of going through it. It's a neat idea because it allows more complex agents to be created without actually adding complex agent design overheads.
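The "around vs through" trade-off falls out naturally if obstacle tiles are made traversable at an extra demolition cost. A minimal Dijkstra sketch of that idea on a grid (my own illustration, not the paper's navigation-mesh implementation, and the demolition cost is an invented parameter):

```python
import heapq

def cheapest_path_cost(grid, start, goal, demolish_cost=3):
    """Dijkstra over a grid where obstacle tiles ('#') can be entered
    at an extra demolition cost -- so the search automatically weighs
    detouring around an obstacle against tearing through it."""
    rows, cols = len(grid), len(grid[0])
    best = {start: 0}
    frontier = [(0, start)]
    while frontier:
        cost, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost
        if cost > best[(r, c)]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = 1 + (demolish_cost if grid[nr][nc] == '#' else 0)
                if cost + step < best.get((nr, nc), float('inf')):
                    best[(nr, nc)] = cost + step
                    heapq.heappush(frontier, (cost + step, (nr, nc)))
    return None

wall = ["..#..",
        "..#..",
        "....."]
# Detouring around the wall costs 8 moves; smashing straight through
# costs 4 moves plus 3 for demolition = 7, so the unit goes through.
print(cheapest_path_cost(wall, (0, 0), (0, 4)))  # -> 7
```

Raise the demolition cost high enough and the same search quietly switches back to the detour, which is exactly the kind of emergent behaviour the paper is after.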
Terrain Analysis in Real-time Strategy Games: Choke Point Detection and Region Decomposition
People: Luke Perkins
Another pathfinding-related paper, this work focuses on decomposing game maps (competition maps from StarCraft in this case) in order to identify choke points. The ultimate objective is to partition the environment into a series of different areas which can be used by higher-level AI routines to (for example) facilitate efficient navigation and coordinate strategic planning. The decompositions I saw looked very promising indeed; the algorithm appears to be very effective at partitioning the map into strategic areas (usually convex in shape and of near-maximal size) which are connected to each other only by transitions at choke-point locations. Nice job Luke!
Day 3 Highlights
The third day opened with a keynote from Microsoft Researcher Sumit Basu. According to the program, Sumit's research focuses on making music generation more accessible to non-musicians. I wish I could give a detailed account of the talk but sadly I missed it — I was too busy double and triple checking my slides for later in the day!
I did manage to arrive at the conference in time for the announcements of the StarCraft AI competition winners, however. 28 bots were submitted and four tournaments were held:
Tournament 1 (Micromanagement) was won by FreSCBot, an AI from two independent developers which makes use of a multi-agent finite-state machine approach.
Tournament 2 (Small-scale combat) was also taken out by FreSCBot.
Tournament 3 (Tech-limited game) was won by Luke Perkins' MimicBot which uses a mirroring strategy to play.
Tournament 4 (Full Gameplay) was won by Overmind. This AI, by a team of students at UC Berkeley, used a highly effective mutalisk micromanagement strategy to snare overall victory.
Aside from eternal bragging rights, each winner received a limited edition copy of StarCraft 2 which was signed by the development team at Blizzard. Well done guys!
A Comparison of High-Level Approaches for Speeding Up Pathfinding
People: Nathan Sturtevant, Robert Geisberger
This work looked at the efficacy of a number of different approaches for speeding up pathfinding. In his talk Nathan compared the hierarchical pathfinding algorithm used in the recent role-playing game Dragon Age: Origins with two lesser-known but very fast approaches: Differential Heuristics and Contraction Hierarchies. The former is an efficient memory-based heuristic that reduces pathfinding to a constant number of table lookup operations while the latter is a new technique which speeds up search by adding so-called shortcut edges into a graph. The most interesting result, in my opinion, was the reported performance of the Contraction Hierarchies method. It appears that, when tuned correctly, this algorithm is several times faster than standard A* search and in some cases even competitive with the suboptimal hierarchical pathfinder. Nice!
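Differential heuristics are simple enough to sketch in a few lines. Precompute true distances from a pivot node; then, by the triangle inequality, |d(pivot, a) - d(pivot, b)| is an admissible estimate of d(a, b) that costs just two table lookups per query. A small illustration on an unweighted graph (my own sketch of the general technique):

```python
from collections import deque

def bfs_distances(adj, source):
    """All shortest path lengths from `source` in an unweighted graph
    (done once, offline, per pivot)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def differential_heuristic(pivot_dists, a, b):
    """h(a, b) = |d(pivot, a) - d(pivot, b)|: admissible by the
    triangle inequality, and only two table lookups at query time."""
    return abs(pivot_dists[a] - pivot_dists[b])

# A small path graph 0-1-2-3-4 with the pivot at node 0.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
pivot = bfs_distances(adj, 0)
print(differential_heuristic(pivot, 1, 4))  # -> 3 (the true distance)
```

In practice several pivots are used and the maximum estimate taken; the trade-off is memory (one distance table per pivot) for much tighter guidance than a plain geometric heuristic.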
DHPA* and SHPA*: Efficient Hierarchical Pathfinding in Dynamic and Static Game Worlds
People: Alexander Kring, Alex J. Champandard, Nick Samarin
This paper (see article), presented by Alexander Kring, introduced two variations on the well-known HPA* algorithm. Both SHPA* and DHPA* aim to eliminate the insertion overhead associated with HPA*, but the latter also makes use of lookup tables to improve the performance of search when traversing individual clusters. Alexander went on to discuss the applicability of DHPA* to maps in which the terrain features dynamic obstacles and touched on one of the less studied aspects of pathfinding that game developers often face: parallel processing.
Breaking Path Symmetries in 4-connected Grid Maps
People: Daniel Harabor, Adi Botea
My own submission (PDF) to this year's conference is a novel technique for speeding up pathfinding search. Rather than invent a new algorithm to go fast, we instead develop a technique for reducing the size of the search space by eliminating symmetry. Our approach is simple: we decompose an arbitrary grid map into a series of empty rectangular rooms and discard all tiles from the interior of each room — leaving only a boundary perimeter. Finally, we connect each pair of tiles on directly opposite sides of the perimeter. It turns out that this simple strategy can speed up A* (and indeed, any search algorithm) by 3-4 times without any loss of solution optimality. I think the most exciting aspect of this work, however, is that it is completely orthogonal to all existing pathfinding optimisations. Given how simple and effective the technique can be, I hope that it will be noticed by performance-hungry game developers in the near future.
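For the curious, here's a rough sketch of the decomposition step for a single empty room (a simplified illustration of the idea, assuming 4-connected unit-cost tiles; the paper handles arbitrary maps, not just one rectangle):

```python
def perimeter_graph(width, height):
    """Empty rectangular room: discard the interior, keep only the
    boundary tiles, and add a 'macro' edge joining each pair of tiles
    that face each other across the room. The macro edge weight is the
    true walking distance across the (empty) interior."""
    border = [(x, y) for x in range(width) for y in range(height)
              if x in (0, width - 1) or y in (0, height - 1)]
    bset = set(border)
    edges = {}
    # Ordinary unit-cost edges between adjacent boundary tiles.
    for (x, y) in border:
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in bset:
                edges.setdefault((x, y), []).append((nb, 1))
    # Macro edges: top <-> bottom and left <-> right.
    for x in range(1, width - 1):
        edges[(x, 0)].append(((x, height - 1), height - 1))
        edges[(x, height - 1)].append(((x, 0), height - 1))
    for y in range(1, height - 1):
        edges[(0, y)].append(((width - 1, y), width - 1))
        edges[(width - 1, y)].append(((0, y), width - 1))
    return edges

g = perimeter_graph(10, 10)
print(len(g))  # -> 36 boundary tiles instead of 100
```

Every optimal path through an empty room can be rerouted along the perimeter and macro edges at identical cost, which is why the interior tiles — and all the symmetric permutations of paths through them — can be discarded.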
Curious About RTS AI Research?
Editor's Note: On Sunday, we'll be running a free live online interview with Ben Weber, the organizer of the StarCraft AI contest at AIIDE 2010. It costs nothing to attend; all you need is a browser and Flash. Just sign up to the site and set yourself a reminder from our brand new event page! See you this weekend.