CIG is a refreshing conference to my mind, because you can put any three attendees in a room and get four different takes on what the conference is about. There are the Game Theorists, talking Nash Equilibria and mixed strategies (Hingston, 2009), and the Serious Games people taking a what-if simulation approach to real-world scenarios such as Disaster Relief or Naval Strategy (Avery, Louis, & Avery, 2009). There are researchers working on a whole range of more classical board and card games such as Poker, Diplomacy or Risk (Kemmerling et al., 2009). And of course, there are the Video Game researchers — but even within this group there are further distinctions to be made between those using games as a controlled environment in which to develop better AI techniques (Thompson & Levine, 2009), those using AI to push forward the use of technology in contemporary games (Galli, Loiacono, & Lanzi, 2009), and those using the marriage of advanced AI methodology with video games for some other end, such as behavioral modelling of players (Drachen, Canossa, & Yannakakis, 2009) or automated content creation (Tanimoto, Robison, & Fan, 2009).
Even within our own specific niche, our community remains very diverse. Not only diverse but also of a very high quality — it's hard to single out specific pieces of research to point to as being worth mentioning, because it almost certainly means excluding something else that deserves to be here. You can see the full proceedings here to make sure you don't miss anything. (Also see the bottom of this article for the full references from the previous paragraph!)
By far the most interesting paper to me was one describing an application of AI to creating an agent that plays Magic: The Gathering (Ward & Cowling, 2009). I'm sure that many of you will recall Magic fondly from your childhood or adolescence — many more of you might remember it as a significant money-sink that consumed your entire life for a period. The paper presented an approach to a subset of the game, using only the "creature" spells (i.e. those that summon minions which you can use to attack the opponent or to defend against such an attack), built around a Monte Carlo-based approach. This plays out a set of games from the current state, using the UCT algorithm to simulate play in order to gather a sample of how the game is likely to turn out based on your choice of action at this point. So far the results of applying this to the slim-line version of Magic have proven pretty promising, and it will be very interesting to see this develop into more advanced versions of the game — not only to develop great automatic Magic players (a challenging goal in itself), but also to prove the power of the approach in an environment with extremely complex interacting effects. If it proves effective, it could reinforce even further the potential of Monte Carlo methods for game strategies in general.
Monte Carlo Search Applied to Card Selection in Magic: The Gathering Ward, C. D. & Cowling, P. I., CIG 2009. Download PDF
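To make the idea concrete, here's a minimal sketch of the Monte Carlo principle in Python. Everything here is invented for illustration: a trivial "race to ten points" game stands in for Magic, and plain flat Monte Carlo stands in for the paper's more sophisticated UCT. Each candidate action is scored by the win rate of random playouts from the resulting state:

```python
import random

def rollout(state, rng):
    # Play random moves to the end of a toy race-to-ten game.
    # State: (my_total, opp_total, my_turn); each move adds 1..3 points.
    mine, theirs, my_turn = state
    while mine < 10 and theirs < 10:
        step = rng.randint(1, 3)
        if my_turn:
            mine += step
        else:
            theirs += step
        my_turn = not my_turn
    return 1.0 if mine >= 10 else 0.0

def monte_carlo_choice(state, actions, n_playouts=200, seed=0):
    # Score each action by the fraction of random playouts it wins.
    rng = random.Random(seed)
    best, best_score = None, -1.0
    for a in actions:
        mine, theirs, _ = state
        wins = sum(rollout((mine + a, theirs, False), rng)
                   for _ in range(n_playouts))
        score = wins / n_playouts
        if score > best_score:
            best, best_score = a, score
    return best

# From 7 points against an opponent on 8, only playing 3 wins immediately.
print(monte_carlo_choice((7, 8, True), [1, 2, 3]))  # → 3
```

UCT improves on this flat sampling by spending more playouts on actions that look promising so far, which matters when the branching factor is as large as Magic's.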
Another great paper showcased an approach to extracting data from Starcraft replays in order to predict the strategy being used by the opponent (Weber & Mateas, 2009). Broadly speaking, the idea is to characterise a replay as a feature vector reflecting key points within the game, such as when buildings are created and units trained. This allows for an analysis of the manner in which the tech tree is being expanded, which in turn makes it possible to predict what a player is aiming for. As a naïve example, a player is unlikely to focus on unlocking high-end units unless they intend to use them, although the technique presented is much more powerful than this level of observation alone.
Generalising these observations into broader strategies for classification can then identify trends within games, as well as invariants, particularly with an emphasis on the different races being played. For example, in Terran vs Protoss games, a trend was identified where the vast majority of Terran players would create a Factory-type building at around four minutes into the game. These trends can then be isolated and named to provide a broad description of a strategy, which can in turn be used to analyse the frequency with which it is employed in different matchups. It was then shown that, using this data, machine learning techniques could predict strategies as early as five minutes into the game with a fair degree of accuracy (around 70%).
A Data Mining Approach to Strategy Prediction Weber, B. & Mateas, M., CIG 2009. Download PDF
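A rough sketch of the underlying idea, with invented building names, timings and strategy labels — the paper uses far richer features and proper classifiers, whereas this toy version simply does 1-nearest-neighbour on the timing vectors:

```python
# Each replay is reduced to a feature vector of build timings (in game
# seconds); 0 means the building never appeared in the observed window.
# All data and labels below are invented purely for illustration.
REPLAYS = [
    ({"barracks": 110, "factory": 240, "starport": 0}, "factory-push"),
    ({"barracks": 100, "factory": 250, "starport": 0}, "factory-push"),
    ({"barracks": 90,  "factory": 0,   "starport": 0}, "barracks-rush"),
    ({"barracks": 95,  "factory": 0,   "starport": 0}, "barracks-rush"),
]
FEATURES = ["barracks", "factory", "starport"]

def to_vector(timings):
    # Fix a feature order so replays are comparable as vectors.
    return [timings.get(f, 0) for f in FEATURES]

def predict(timings):
    # 1-nearest-neighbour on squared distance between timing vectors.
    v = to_vector(timings)
    def dist(example):
        w = to_vector(example[0])
        return sum((a - b) ** 2 for a, b in zip(v, w))
    return min(REPLAYS, key=dist)[1]

print(predict({"barracks": 105, "factory": 245}))  # → factory-push
```

The point of the encoding is that once a replay is "just" a vector, any off-the-shelf classifier can be thrown at the strategy-prediction problem.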
Many of you will have seen the recent interview with Robin Baumgarten talking about his recent entry in the Infinite Mario competition, but one paper presented a complete inversion of the same basic toolkit. Instead of creating an agent capable of playing any generated level of Mario Bros, it sought to identify the characteristics of a level that appeal to players, in an effort to allow content designers to better understand what they are making (Pedersen, Togelius, & Yannakakis, 2009). To do this, a range of levels was generated and humans were invited to play through them. By capturing the behavior these players exhibited, along with feedback on their opinions, a neural network was able to map the characteristic traits of a level to a range of metrics such as "challenge", "frustration" or "fun". Although by the authors' own admission the approach isn't quite there yet, it's definitely one to watch as it gets enhanced.
Modeling Player Experience in Super Mario Bros Pedersen, C., Togelius, J. & Yannakakis, G. N., CIG 2009. Download PDF
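As a toy illustration of that kind of mapping — emphatically not the authors' actual model — here is a single sigmoid neuron trained by gradient descent to map two invented level features to a reported "challenge" rating:

```python
import math

# Toy training data: (gap_count, enemy_density) per generated level, paired
# with a player's reported challenge rating in [0, 1]. All values invented.
LEVELS = [((1, 0.1), 0.1), ((2, 0.2), 0.2), ((6, 0.7), 0.8), ((8, 0.9), 0.9)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(data, epochs=3000, lr=0.1):
    # Fit one sigmoid unit by stochastic gradient descent on squared error.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (gaps, enemies), target in data:
            pred = sigmoid(w[0] * gaps + w[1] * enemies + b)
            g = (pred - target) * pred * (1 - pred)  # d(error)/d(logit)
            w[0] -= lr * g * gaps
            w[1] -= lr * g * enemies
            b -= lr * g
    return w, b

w, b = train(LEVELS)
# Query an unseen, gap-heavy level: the model should rate it challenging.
challenge = sigmoid(w[0] * 7 + w[1] * 0.8 + b)
print(round(challenge, 2))
```

The real work used a proper multi-layer network and many more behavioral features, but the shape of the problem — level traits in, experience metric out — is the same.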
I also wanted to mention a paper that presented an approach to evolving multi-modal behaviors, which was awarded Best Student Paper at the conference (Schrum & Miikkulainen, 2009). The work presented an approach to evolving different behaviors within a neural net by introducing distinct operators that could be used to duplicate parts of the network and introduce new connections between nodes. The example used was a small demonstration called "Fight or Flight", in which NPCs were evolved to either fight the player or run away from the player. The aim of the game is for the NPC team (four agents) to kill the player; however, there are two distinct modes:
Fight, in which the player agent is equipped with a weapon to fight back and damage the NPCs, and
Flight, in which the player agent has no weapon, making the NPC agents invulnerable.
This gives rise to two competing behavioral techniques, one cautious and one aggressive, both of which must be expressed. Using the new mutation operators they proposed, the authors were able to evolve agents capable of tackling either task with great success, meaning that this new approach to evolving neural networks for more complex domains is definitely looking promising.
Evolving Multi-modal Behavior in NPCs Schrum, J. & Miikkulainen, R., CIG 2009. Download PDF
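A loose sketch of the flavour of such mutation operators, on a hypothetical minimal connection-list encoding (not the paper's actual representation): one operator splices in a new connection, the other duplicates an existing output's incoming links onto a fresh output node, giving the network a new behavioral mode that starts life as a copy of an old one.

```python
import random

# A genome is a list of connections (src, dst, weight); `outputs` lists the
# output-node ids, one per behavioral mode. This encoding is invented here.

def mutate_add_connection(genome, nodes, rng):
    # Structural mutation: add one new weighted link between random nodes.
    src, dst = rng.choice(nodes), rng.choice(nodes)
    genome.append((src, dst, rng.uniform(-1, 1)))

def mutate_new_mode(genome, outputs, next_id):
    # Duplicate the last output's incoming links onto a fresh output node,
    # so the new mode initially behaves identically and can then diverge.
    source_out = outputs[-1]
    copied = [(s, next_id, w) for (s, d, w) in genome if d == source_out]
    genome.extend(copied)
    outputs.append(next_id)

rng = random.Random(1)
genome = [(0, 2, 0.5), (1, 2, -0.3)]  # two inputs (0, 1) feed one output (2)
outputs = [2]
mutate_new_mode(genome, outputs, next_id=3)
mutate_add_connection(genome, nodes=[0, 1, 2, 3], rng=rng)
print(outputs, len(genome))  # → [2, 3] 5
```

The appeal of duplication as an operator is that it preserves existing fitness: the copied mode behaves exactly like its parent until later mutations specialise it for, say, fleeing rather than fighting.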
The Diversification of AI
The highlight of the conference for me was seeing how powerful AI techniques are becoming as they begin to be applied in much more interesting and non-obvious ways. The award for Best Paper was presented to Hastings, Guha and Stanley of the University of Central Florida for their work on Galactic Arms Race (Hastings, Guha, & Stanley, 2009), summarised in an excellent paper and presentation given by Stanley. Galactic Arms Race was previously covered on AiGameDev.com in this article, so I won't go into too much detail. However, the broad idea is that rather than focusing on using AI to build harder or more realistic opponents, the UCF team has instead created a game in which the weaponry evolves — the chromosomes define the mechanics of the weapon (spread, rate of fire and even aesthetics), and fitness is evaluated based on the amount of use a particular weapon receives. Based on this, a new group of weapons is made available for the players to collect (or ignore) at the next iteration, and the process repeats itself.
GAR is played on centralised servers, so the research team can see the results of the evolutionary algorithms first hand, and rumour has it that the game will shortly be available through XBox Live. The game itself was developed in-house at UCF, but despite this the visuals are of a quality fully comparable to industry titles — a far cry from what you might expect from an academic project, and proof positive that academia can create a high-quality gaming experience around a specific research area (although this is somewhat less surprising given the team's track record; Stanley was the originator of the NERO project, which implemented the rtNEAT algorithm for trainable squad-based battles).
In contrast, the keynote by Microsoft Research's David Stern was something of a letdown for me, although it was a great presentation in its own right. It covered three main areas in which AI techniques have been applied to gaming. Firstly, the Drivatar system built into Forza Motorsport, in which gamers could "train" the console to drive the way they do, and then allow some races in the long career mode to be played by simulation. This training is done using Neural Networks and attempts to replicate the way a player handles specific situations and types of track element, such as hairpin bends or chicanes. All well and good — interesting even — but Forza shipped in 2005.
Secondly, a Reinforcement Learning approach to fighting games was demonstrated using Tao Feng: Fist of the Lotus. This was a very interesting section of the presentation: being able to watch as reward tables were updated based on the AI's evaluation of which moves worked in given situations, along with video highlighting it in action at various stages of its learning, served as a great demonstration of the power of the approach. However, Tao Feng shipped in 2003, and regardless, this system did not end up being part of the shipped product. Finally there was discussion of the MS TrueSkill system for player matching and ranking — a nice application, but again something that has been built into all XBox Live titles since 2005.
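In spirit, such a reward table is just a mapping from (situation, move) to a running estimate of how well that move works. The sketch below uses invented situations, moves and rewards, and a simple exponential-average update rather than whatever the Tao Feng system actually implemented:

```python
# A minimal tabular reward-learning sketch: the table maps (situation, move)
# to a running estimate of success. All situations and moves are invented.

def update(table, situation, move, reward, alpha=0.5):
    # Nudge the stored estimate toward the observed reward.
    key = (situation, move)
    old = table.get(key, 0.0)
    table[key] = old + alpha * (reward - old)

def best_move(table, situation, moves):
    # Greedy policy: pick the move with the highest learned estimate.
    return max(moves, key=lambda m: table.get((situation, m), 0.0))

table = {}
MOVES = ["punch", "block", "throw"]
# Pretend the opponent is vulnerable to throws at close range.
for _ in range(20):
    update(table, "close", "throw", reward=1.0)
    update(table, "close", "punch", reward=-0.2)

print(best_move(table, "close", MOVES))  # → throw
```

Watching exactly this kind of table converge, move by move, was what made the Tao Feng segment of the keynote so compelling to sit through.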
On a personal level I found these demonstrations interesting, and getting a look "under the hood" was quite informative, but there does seem to be something amiss when one of the biggest industry research labs cannot show any work more recent than four-year-old research that has already been reasonably well disseminated. To some extent I believe (and certainly hope) that this lag is due more to the commercial concerns of documenting ongoing projects than to an entire research team resting on its laurels, but it was very noticeable for such a big name to be highlighting titles now a full generation behind those being discussed elsewhere at the conference.
These two examples demonstrate the extreme ends of a full spectrum of AI which is starting to come to the fore, and of AI research being taken increasingly seriously. It was recently asserted at the IEEE AI/Games Networking Event held at Imperial College in June 2009 that 90% or more of the AI in games comes down to A*. I think it would also be fair to say that a significant portion of the remaining 10% comes from recent games starting to take advantage of proven AI technologies.
In general there is an increasing amount of interest and engagement between academia and industry. A few examples of this growing trend: Introversion's Defcon now includes an AI API to allow the development of external bots, 2K Australia healthily sponsors the annual "BotPrize" competition, and Eidos allowed data about XBox Live gamers' playstyles to be recorded in Tomb Raider: Underworld to enable player modelling. However, it isn't just this industrial encouragement of AI as a side-project — undertaken by academics and facilitated by contributions from industry — that gives me hope for the future; it's something that seems pervasive right now, and it's evident in all the articles you can read right here on AiGameDev.com.
It's palpable how much more complex things are becoming as AAA titles start to implement more modern approaches to AI — perhaps this is finally the shift towards fulfilment of the often-heard cry of physics and AI game developers that surely, this time, graphics have been pushed far enough and emphasis can be placed on other areas. Certainly, it's true that the era of games having a single AI developer (if that) with a rudimentary understanding of a couple of algorithms from the 70s or 80s is over, and we are now all starting to talk the same language of architectures, models, evolution and a whole host of other modern techniques — and I for one welcome our new NPC overlords!
Player Modeling using Self-Organization in Tomb Raider: Underworld Drachen, A., Canossa, A. & Yannakakis, G. N., CIG 2009. Download PDF
Learning a Context-Aware Weapon Selection Policy for Unreal Tournament III Galli, L., Loiacono, D., & Lanzi, P. L., CIG 2009. Download PDF
Evolving Content in the Galactic Arms Race Video Game Hastings, E. J., Guha, R. K., & Stanley, K. O., CIG 2009. Download PDF
Iterated Prisoner’s Dilemma for Species Hingston, P., CIG 2009. Download PDF
A Game-Building Environment for Research in Collaborative Design Tanimoto, S. L., Robison, T. & Fan, S. B., CIG 2009. Download PDF
Realtime Execution of Automated Plans using Evolutionary Robotics Thompson, T. & Levine, J., CIG 2009. Download PDF