Earlier this month, the IEEE Conference on Computational Intelligence in Games took place at the IT University in Copenhagen. Over 100 researchers from around the world showed up to present their very latest research. In between, there were inspiring keynotes from highly respected figures in the research community, with topics ranging from A-life and bottom-up aesthetics to top-down drama management. AiGameDev.com was there to cover the event, and I (Alex Champandard) gave a tutorial on the first day too.
The rest of this report includes some background about the conference (including things I learned from my first attendance at this event), the state of industrial-academic collaboration (and why many researchers were so depressed because of me), a report from the various competitions run at CIG 2010 (such as the 2K BotPrize and Mario AI), and finally further references so you can find out more about the research track and the papers presented.
Photo 1: The view just outside the ITU in Copenhagen, where the Computational Intelligence in Games conference 2010 was held.
CIG events have historically brought together researchers with a focus on game theory, evolutionary algorithms and neural networks, but the field is now moving quickly into video games (e.g. 2D arcade classics or 3D action games), as well as diversifying the techniques in use. Current popular trends include Monte Carlo tree search and temporal difference learning. On the other side of the Atlantic, the AIIDE conference has historically focused more on classical AI approaches, such as logical reasoning and symbolic techniques.
“CIG has historically focused on game theory and evolutionary algorithms, whereas AIIDE focused more on classical AI approaches.”
Having followed the eruption of tensions at AIIDE on Twitter last year, heard feedback from previous attendees there, and read last year's CIG report from Luke, I was a little apprehensive about the event; it was my first "big" academic conference about game AI. However, based on its success and the feedback, this year's CIG in Copenhagen will no doubt go down as one of the best so far, setting a very high bar for next year's event in Korea. The 2010 edition of CIG officially became a conference rather than a symposium, attracted the most papers yet with the lowest acceptance rate (i.e. higher-quality papers), and further bridged the gap between the CIG and AIIDE communities as well as with industry. I very much enjoyed those few days too!
The credit for this success goes to Julian Togelius and Georgios Yannakakis, the organizers from the ITU Copenhagen, one of a handful of research departments that seem to really "understand" game AI. From my seat on the organizing committee, it was impressive watching Julian and Georgios pull off lots of great ideas, including the live stream for example, on top of a solid line-up. I also learned a few things for the next edition of our very own Paris Game AI Conference!
The State of Academic Research
On the first day of the conference this year, I gave a 1h30 tutorial about working with industry, and how best to approach collaboration. The talk included some observations about current trends in industry, to help researchers understand the big picture better and hopefully tackle better problems. I focused my talk on the academic side of things, because it'll take an inside job to fix the problems on the industry side.
The crux of my argument was really that you need to pick problems very carefully, to make sure not to compete or overlap with middleware developers, consultants, open source developers, indie game developers, or even work that's already being done in a satisfactory way in industry. Despite preparing for this presentation more than any other I've given, and gathering more developers' opinions about it than any other, I was rather apprehensive about the whole thing beforehand.
From my analysis, I thought the whole situation was rather grim. Based on all papers from 2009 at both CIG and AIIDE, few researchers seemed interested in working on techniques aiming directly towards the games industry; those that were interested could use another iteration to find a breakthrough.
Since the talk, based on conversations and feedback from attendees at CIG 2010, I realized the following:
The part of academia that's working closely with industry is getting a much better idea of what commercial games require. The many new outlets for information are indeed proving beneficial for the community as a whole, though there's still room for improvement in actually using the information that's already available!
Industry still does not put as much focus on AI as on, say, graphics or animation, and I'd say there's insufficient interest. For academic research, this translates into much less funding for game AI projects, compared to the variety of animation work that seems to have less trouble finding funding from industry sources.
The same goes for government funding, which points to a visibility problem for "game AI" across the board. Multiple Ph.D. students and research groups (the ones doing the more relevant work) are in a tricky situation: either there aren't enough post-doctoral positions available in this area, or the funding is drying up due to the area's low prestige (as measured by journals).
It seems that industry is not particularly open to disruptive or even innovative game AI techniques that affect the design of games anyway. So the occasional ideas from academia that are already applicable are not necessarily taken on board... I had trouble justifying this to a local journalist, but that unfortunately seems to be the case!
Despite all this, my talk went down surprisingly well. Most researchers thanked me for my honest perspective on things, and it turned out to be a surprisingly diplomatic presentation in the end :-) That said, even though I outlined many action steps for research groups to take, considering the bad news my tutorial carried implicitly, the whole picture remained grim. Someone said to me afterwards: "Thanks for the great talk... but it was really depressing!" There's obviously a lot to be done on both sides, and number one on the list must be advocacy!
Steve Rabin followed the next day with his keynote about the history of game AI, and tried to lift the mood by pointing to the amazing opportunities in design. AI has been a driving factor for innovation, as many past titles have shown. However, in his last slides Steve came to the same conclusion I did: the picture remains grim for game AI research in the near future. Steve thinks there'll be incremental improvements from certain studios in industry over the next few years, but it'll take a while for a disruptive technology to come along. There's no sign of one at all...
Photo 3: Steve Rabin's keynote on the first day, delivering an inspiring and humorous look at the past of artificial intelligence in games, but an equally grim picture of the short-term future of game AI.
Bottom-Up Aesthetics vs. Top-Down Drama
In other news, the 2010 CIG conference answered the question of which famous researcher would win if they got into a fight: Espen Aarseth or Marc Cavazza. Espen won this particular battle and will no doubt turn out to be right in the near- to medium-term, but Marc's research is addressing interesting issues that are valuable for the long term. As he puts it: "We're inventing a new medium that nobody wants." Not yet, at least!
Many of us at the conference and watching the live stream felt Espen's argument was more convincing, as the case for the bottom-up approach in the short term is incredibly compelling:
- It takes little or no technology; we can do it today.
- As a simple emergent approach, it's indie friendly and low-budget.
- Focusing on the bottom-up benefits gameplay directly.
- It makes for a more replayable experience with longer gameplay times.
- Little tricks can probably help remove boring moments without a planner.
Afterwards, Michael Mateas helped clarify a few points for me. From the narrative perspective, top-down is typically a story-driven approach whereas bottom-up is effectively character-driven. If you look at it from the perspective of control systems, top-down is more of a goal-driven planner whereas bottom-up is characterized by more reactive techniques. (That confused me a bit!) So if you include some story-driven elements but encode them as reactive rules to help drive the narrative, then it seems to be both top-down and bottom-up, depending on which perspective you're thinking about :-)
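To make that control-systems distinction concrete, here's a minimal sketch of the two styles: a bottom-up character as a list of reactive rules (first match fires, no lookahead), and a top-down director as a tiny backward-chaining planner working toward a story goal. All the names, rules, and actions below are invented for illustration and aren't taken from any real system.

```python
def reactive_character(state):
    """Bottom-up: the first rule whose condition matches fires immediately."""
    rules = [
        (lambda s: s["health"] < 30, "flee"),
        (lambda s: s["enemy_visible"], "attack"),
        (lambda s: True, "wander"),  # fallback behavior
    ]
    for condition, action in rules:
        if condition(state):
            return action

def plan_story(goal, actions, facts):
    """Top-down: backward-chain from the story goal to an ordered plan.
    Each action maps name -> (precondition, effect)."""
    if goal in facts:
        return []
    for name, (precond, effect) in actions.items():
        if effect == goal:
            sub_plan = plan_story(precond, actions, facts)
            if sub_plan is not None:
                return sub_plan + [name]
    return None  # goal unreachable

# The reactive character responds to the moment...
print(reactive_character({"health": 20, "enemy_visible": True}))  # flee

# ...while the planner works out how to reach the story goal.
actions = {
    "find_sword": ("at_castle", "has_sword"),
    "travel":     ("start", "at_castle"),
    "duel":       ("has_sword", "villain_defeated"),
}
print(plan_story("villain_defeated", actions, {"start"}))
# ['travel', 'find_sword', 'duel']
```

Encoding the `duel` step as a reactive rule instead would be exactly the hybrid case above: story-driven content, character-driven control.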
Overall, the competitions were extremely useful, and went as far as drawing out practical and focused approaches from many of the contestants! Maybe there's a little hacker within every academic researcher if you look deep enough? Except for a few assumptions in the competition rules, the barriers between industry and academia mostly disappeared during those contests, as the participants went through very similar problem-solving processes to those you'd find in industry. From my perspective it was also interesting to see the different tools chosen compared to what I would have used.
“Most contests should be run online before the conference to get statistically significant results.”
In practice, however, many of the new contests had glitches, and some of the older contests seem to struggle with recent changes as well. I think most contests that require human voting and feedback should be run online before the conference, so there are more statistically significant data points than a handful of academic researchers (deciding which level is more fun or which bot looks more realistic).
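To put rough numbers on that: a Wilson score interval shows how little a handful of on-site votes can tell you compared to an online poll with the same split. The vote counts below are invented purely for illustration.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a vote proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - margin, centre + margin

# Five judges at the venue: 4/5 prefer level A, but the interval is huge...
print(wilson_interval(4, 5))      # roughly (0.38, 0.96)

# ...while 160/200 online voters with the same split tell a clearer story.
print(wilson_interval(160, 200))  # roughly (0.74, 0.85)
```

With five judges the interval still straddles most of the range, so "level A is more fun" is barely supported; running the poll online beforehand shrinks it dramatically.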
The Super Mario procedural level generation contest was designed to provide gameplay experiences customized to each and every player. The customizations were based on the results of playing a fixed first level, but the level chosen was extremely difficult, and by the time people got used to the controls they'd have died three times within 5s. Hardly enough information to base a customized level generation algorithm on. It turns out the data provided to the level generators was relatively coarse anyway, and it seems no entry even used the statistics to customize the levels at all. Also, confirming the noise in the evaluation method, the winning entry was a 6-hour hack from Ben Weber. Peter Mawhorter, his lab colleague, would probably have won hands down if it weren't for a Unicode file encoding bug that instead generated incredibly bizarre but playable patterns in the levels (think Mario levels generated by Salvador Dalí).
Photo 5: Julian Togelius (organizer) playing what should have been the winning entry, but ended up generating random and very difficult levels due to a text file encoding bug. In the end, the levels were surprisingly playable, extremely creative and probably the most rewarding of all levels generated despite the bug!
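For illustration, the kind of player-adaptive generation the contest was after can be sketched roughly as follows: map a few calibration-level metrics to a difficulty target, then bias the tile distribution accordingly. The metric names, thresholds, and tile types here are all invented for this example, not taken from the contest API.

```python
import random

def difficulty_from_stats(stats):
    """Map calibration-level metrics to a 0..1 difficulty target.
    Higher completion and fewer deaths suggest a stronger player."""
    skill = stats["completion"] - 0.1 * stats["deaths"]
    return max(0.0, min(1.0, skill))

def generate_level(stats, length=20, seed=0):
    """Emit a tile sequence whose hazard density scales with player skill."""
    rng = random.Random(seed)  # seeded for reproducible levels
    difficulty = difficulty_from_stats(stats)
    level = []
    for _ in range(length):
        roll = rng.random()
        if roll < 0.2 * difficulty:
            level.append("gap")
        elif roll < 0.5 * difficulty:
            level.append("enemy")
        else:
            level.append("flat")
    return level

# A player who struggled on the calibration level gets a gentler level.
novice = generate_level({"completion": 0.3, "deaths": 3}, seed=42)
expert = generate_level({"completion": 1.0, "deaths": 0}, seed=42)
print(novice.count("flat") >= expert.count("flat"))  # True
```

Of course, three deaths in five seconds gives a generator like this almost nothing to distinguish players by, which was exactly the problem.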
The 2K BotPrize suffered a bit this year. There were fewer entries than previously, presumably due to the high barrier to entry compared to the Mario AI contest. Working in Java when most game AI code is C++, on top of an outdated closed-source engine, doesn't help; many developers also seem to be focusing on the StarCraft competition later this year. A few factors also made the competition logistics sub-optimal:
- The whole evaluation is a set of free-for-all deathmatch games, with bots and judges playing in a large level. This results in rather chaotic gameplay with few signs of tactical play.
- The bots are primarily judged on their motor skills, such as aiming/turning and moving. The movement seems heavily constrained by Unreal Tournament 2004's navigation implementation, and the turning still looks very robotic when spectating from a first-person perspective.
- The judging now happens at runtime within the game. You pick up the link gun and aim at the player to tag, then LEFT-click to mark them as a bot or RIGHT-click to mark them as a player. You only need to do this once, but judges found themselves doing it repeatedly.
- Bots could judge, but were not built for it. So it was particularly easy to pick out the humans (only judges, who were judging) from the bots (who were playing normally).
- One judge actively tried to act like a bot, but still scored higher than the best bot. This was the result of the meta-game of having judges compete against each other to identify bots and players.
- All judges (effectively also players) were in the same game and even the same room, which implicitly made things easier to evaluate if you were fighting against another judge... Chances are you'd get a subtle reaction from them or a pattern in their mouse and keyboard activity.
As I mentioned afterwards, the contest could be vastly improved by focusing on 1 vs. 1 matches with many more tactical opportunities (like Quake Live), having the human players under evaluation not be judges, and letting the judges decide whether each player was a bot outside of the game. The link gun may have some benefits for tagging realistic or unrealistic behaviors "online", but it could also be done more accurately by reviewing the recordings.
Photo 6: A judge (Gordon Calleja) playing against the bots in Unreal Tournament 2004. Gordon not only turned out to be the best judge, but was also trying to behave like a bot. Yet he still scored 38% "humanness", noticeably above the winning bot at 31%.
We're expecting the papers from CIG 2010 to be published online shortly, and when that happens I'll write some reviews of the best papers on the blog. In the meantime, check out the recorded sessions online on Vimeo. (My tutorial is there too...)
CIG 2010 was an interesting duality for me. While the conference itself was an amazing and stimulating event for me and many others there, my feelings after leaving were not the most optimistic. If this post seems less positive than usual, it's no coincidence! There really hasn't been much academic research & collaboration on game AI in the past, and things are still in their infancy compared to other fields of video games. It doesn't help that few industry developers attend such conferences, and when they do (e.g. to give a talk) they rarely return.
That said, things are on the right track! The competitions are a great way to encourage practical solutions from the community, even if I'd personally allow the AI to access all the information it wants. Also, there's much better information available to researchers so new projects are better informed. That's a big motivation for us at AiGameDev.com as well, and rest assured we're working hard on the other issues I brought up too!