
CIG ’10: Computational Intelligence in Games 2010 Conference Report

Alex J. Champandard on August 29, 2010

Earlier this month, the IEEE Conference on Computational Intelligence in Games took place at the IT University in Copenhagen. Over 100 researchers from around the world showed up to present their very latest research. In between the sessions, there were inspiring keynotes from highly respected figures in the research community, with topics ranging from A-life and bottom-up aesthetics to top-down drama management. AiGameDev.com was there to cover the event, and I (Alex Champandard) gave a tutorial on the first day too.

The rest of this report includes some background about the conference (including things I learned from attending this event for the first time), the state of industrial-academic collaboration (and why many researchers were so depressed because of me), a report on the various competitions run at CIG 2010 (such as the 2K BotPrize and Mario AI), and finally further references so you can find out more about the research track and the papers presented.


Photo 1: The view just outside the ITU in Copenhagen, where the Computational Intelligence in Games conference 2010 was held.

Some Background...

CIG events have historically brought together researchers with a focus on game theory, evolutionary algorithms and neural networks, but the community is now moving quickly into video games (e.g. 2D arcade classics or 3D action games), as well as diversifying the techniques in use. Current popular trends include Monte Carlo tree search and temporal-difference learning. On the other side of the Atlantic, the AIIDE conference has historically focused more on classical AI approaches, such as logical reasoning and symbolic techniques.

“CIG has historically focused on game theory and evolutionary algorithms, whereas AIIDE focused more on classical AI approaches.”
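
For readers coming from the industry side, here's a minimal sketch of the selection step at the heart of Monte Carlo tree search, using the standard UCB1 formula. The Node and Mcts names are my own illustration, not code from any CIG paper.

```java
// Minimal sketch of the UCB1 selection rule at the core of Monte Carlo
// tree search. All names here are illustrative, not from a specific paper.
import java.util.List;

class Node {
    double totalReward;   // sum of simulation rewards backed up through this node
    int visits;           // how often this node has been explored
    List<Node> children;
}

class Mcts {
    static final double EXPLORATION = Math.sqrt(2.0);

    // Pick the child that maximizes exploitation + exploration.
    static Node selectChild(Node parent) {
        Node best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Node child : parent.children) {
            double score = (child.visits == 0)
                ? Double.POSITIVE_INFINITY  // always try unvisited moves first
                : child.totalReward / child.visits
                  + EXPLORATION * Math.sqrt(Math.log(parent.visits) / child.visits);
            if (score > bestScore) {
                bestScore = score;
                best = child;
            }
        }
        return best;
    }
}
```

Each iteration descends the tree with this rule, runs a simulation from the chosen leaf, and backs the result up the path; the exploration constant trades off trying under-sampled moves against exploiting known-good ones.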

Having followed the eruption of tensions at AIIDE on Twitter last year, and having heard feedback from previous attendees, combined with last year's CIG report from Luke, I was a little apprehensive about the event; it was my first "big" academic conference about game AI. However, based on its success and the feedback, this year's CIG in Copenhagen will no doubt go down as one of the best so far, setting a very high bar for next year's event in Korea. The 2010 edition of CIG officially became a conference rather than a symposium, attracted a record number of papers with the lowest acceptance rate so far (and thus higher-quality papers), and further bridged the gap between the CIG and AIIDE communities as well as with industry. I very much enjoyed those few days too!

The credit for this success goes to Julian Togelius and Georgios Yannakakis, the organizers from the ITU Copenhagen — one of a handful of research departments that seem to really "understand" game AI. From my seat on the organizing committee, it was impressive watching Julian and Georgios pull off lots of great ideas, the live stream for example, on top of a solid line-up. I also learned a few things for the next edition of our very own Paris Game AI Conference!

The State of Academic Research


Photo 2: Approximately half of the CIG 2010 attendees in the auditorium for the main track.

On the first day of the conference this year, I gave a 90-minute tutorial about working with industry and how best to approach collaboration. The talk included some observations about current trends in industry, to help researchers understand the big picture better and hopefully tackle better problems. I focused my talk on the academic side of things, because it'll take an inside job to fix the problems on the industry side.

The crux of my argument was that you need to pick problems very carefully, making sure not to compete or overlap with middleware developers, consultants, open-source developers, indie game developers, or even work that's already being done in a satisfactory way in industry. Despite preparing for this presentation more than any other I've given, and getting more developers' opinions about it than any other, I was rather apprehensive about the whole thing beforehand.

From my analysis, the whole situation looked rather grim. Based on all the papers from 2009 at both CIG and AIIDE, few researchers seemed interested in working on techniques aimed directly at the games industry, and those that were interested could use another iteration to find a breakthrough.

Since the talk, based on conversations and feedback from attendees at CIG 2010, I realized the following:

  • The part of academia that's working closely with industry is getting a much better idea of what commercial games require. The many new outlets for information are indeed proving beneficial for the community as a whole, though there's still room for improvement in actually using the information that's already available!

  • Industry still doesn't put as much focus on AI as it does on, say, graphics or animation, and I'd say there's insufficient interest. For academic research, this translates into much less funding for game AI projects, compared to the variety of animation work that seems to have less trouble finding funding from industry sources.

  • The same goes for government funding, which points to a visibility problem for "game AI" across the board. Multiple Ph.D. students and research groups (who are doing the more relevant work) are in a tricky situation: either there aren't enough post-doctoral positions available in this area, or the funding is drying up due to the area's lack of prestige (as measured by journals).

  • It seems that industry is not particularly open to disruptive or even innovative game AI techniques that affect the design of games anyway. So the occasional ideas from academia that are already applicable are not necessarily taken on board... I had trouble justifying this to a local journalist, but that unfortunately seems to be the case!

Despite all this, my talk went down surprisingly well in the end. Most researchers thanked me for my honest perspective on things, and it turned out to be a surprisingly diplomatic presentation :-) That said, even though I outlined many action steps for research groups to take, considering the bad news my tutorial carried implicitly, the whole picture remained grim. Someone said to me afterwards: "Thanks for the great talk... but it was really depressing!" There's obviously a lot to be done on both sides — and number one on the list must be advocacy!

Steve Rabin followed the next day with his keynote about the history of game AI, and tried to lift the mood by pointing out the amazing opportunities in design. AI has been a driving factor for innovation, as many past titles have shown. However, in his last slides Steve came to the same conclusion I did: the picture remains grim for game AI research in the near future. Steve thinks there'll be incremental improvements from certain studios in industry over the next few years, but it'll take a while for a disruptive technology to come along. There's no sign of one at all...


Photo 3: Steve Rabin's keynote on the first day, delivering an inspiring and humorous look at the past of artificial intelligence in games, but an equally grim picture of the short-term future of game AI.

Bottom-Up Aesthetics vs. Top-Down Drama

In other news, the 2010 CIG conference answered the question of which famous researcher would win if they got into a fight: Espen Aarseth or Marc Cavazza. Espen won this particular battle and will no doubt turn out right in the near to medium term, but Marc's research is addressing interesting issues that are valuable for the long term. As he puts it: "We're inventing a new medium that nobody wants." Not yet, at least!

Many of us at the conference and watching the live stream felt Espen's argument was more convincing, as the case for the bottom-up approach in the short term is incredibly compelling:

  • It takes little or no technology; we can do it today.
  • As a simple emergent approach, it's indie friendly and low-budget.
  • Focusing on the bottom-up benefits gameplay directly.
  • It makes for a more replayable experience with longer gameplay times.
  • Little tricks can probably help remove boring moments without a planner.

Afterwards, Michael Mateas helped clarify a few points for me. From the narrative perspective, top-down is typically a story-driven approach whereas bottom-up is effectively character-driven. If you look at it from the perspective of control systems, top-down is more of a goal-driven planner whereas bottom-up is characterized by more reactive techniques. (That confused me a bit!) So if you include story-driven elements but encode them as reactive rules to help drive the narrative, then it seems to be both top-down and bottom-up — depending on which perspective you're thinking about :-)
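
To make that distinction concrete, here is a minimal sketch, entirely my own invention (the World type and the beat encoded here are hypothetical, not from any system presented at the conference), of a story beat written as a reactive rule: top-down if you ask what it does to the narrative, bottom-up if you ask how it's controlled.

```java
// Hypothetical sketch: a story-driven "beat" encoded as a reactive rule.
// From the narrative perspective this is top-down (it pushes the plot
// forward); from the control perspective it's bottom-up (no planner,
// just a rule that fires whenever its trigger condition holds).
interface ReactiveRule {
    boolean condition(World world);
    void action(World world);
}

class RevealBetrayalBeat implements ReactiveRule {
    public boolean condition(World world) {
        // Fire only once the player trusts the mentor and reaches the scene.
        return world.trust("player", "mentor") > 0.8
            && world.location("player").equals("throne_room");
    }
    public void action(World world) {
        world.startDialogue("mentor_betrayal");
    }
}

// Stub world model so the sketch is self-contained.
class World {
    double trust(String a, String b) { return 0.9; }
    String location(String who) { return "throne_room"; }
    void startDialogue(String id) { System.out.println("Playing: " + id); }
}
```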

Competitions


Photo 4: One of the CIG attendees judging a level in an open-source clone of Mario.

Overall, the competitions were extremely useful, and went as far as drawing out practical, focused approaches from many of the contestants! Maybe there's a little hacker within every academic researcher if you look deep enough? Except for a few assumptions in the competition rules, the barriers between industry and academia mostly disappeared during those contests, as the participants went through very similar problem-solving processes to those used in industry. From my perspective it was also interesting to see the different tools chosen compared to what I would have used.

“Most contests should be run online before the conference to get statistically significant results.”

In practice, however, many of the new contests had glitches, and some of the older contests seemed to struggle with recent changes as well. I think most contests that require human voting and feedback should be run online before the conference, so there are more statistically significant data points than a handful of academic researchers (deciding which level is more fun or which bot looks more realistic).
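
To put some numbers on that claim, here's a back-of-the-envelope sketch (my own simplification, assuming each vote is an independent binomial trial) of how the margin of error shrinks as the number of judges grows:

```java
// Back-of-the-envelope sketch (my own, not from any contest): the 95%
// margin of error for a "which level is more fun?" vote, modeled as a
// binomial proportion via the normal approximation.
class VoteNoise {
    static double marginOfError(double p, int n) {
        return 1.96 * Math.sqrt(p * (1.0 - p) / n);
    }

    public static void main(String[] args) {
        for (int n : new int[] {10, 100, 1000}) {
            System.out.printf("n=%4d judges -> +/- %.1f%%%n",
                              n, 100 * marginOfError(0.5, n));
        }
        // n=  10 judges -> +/- 31.0%
        // n= 100 judges -> +/- 9.8%
        // n=1000 judges -> +/- 3.1%
    }
}
```

With only ten or so judges the uncertainty spans nearly a third of the scale, so small differences between levels or bots simply can't be resolved.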

The Super Mario procedural level generation contest was designed to provide gameplay experiences customized to each and every player. The customizations were based on the results of playing a first, fixed level, but the level chosen was extremely difficult, and by the time people got used to the controls they'd have died three times within five seconds. Hardly enough information to base a customized level generation algorithm on. It turns out the data provided to the level generators was relatively coarse anyway, and it seems no entry even used the statistics to customize the levels at all. Also, confirming the noise in the evaluation method, the winning entry was a six-hour hack from Ben Weber — though Peter Mawhorter's entry (his lab colleague) would probably have won hands down if it weren't for a Unicode file-encoding bug that instead generated incredibly bizarre but playable patterns in the levels (think Mario levels generated by Salvador Dalí).
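
For illustration, here is roughly the shape of generator the contest called for, as a heavily simplified and hypothetical sketch (the real competition exposed a Java interface with different names and richer metrics than shown here):

```java
// Hypothetical sketch of the contest's intent, not the actual API:
// consume metrics from the player's first run and bias the generated
// level's difficulty accordingly. All names are illustrative.
class PlayMetrics {
    int deaths;
    double completionRatio;  // 0.0 = died at the start, 1.0 = finished
}

class AdaptiveLevelGenerator {
    // Map observed skill to a difficulty knob; dying three times in the
    // first seconds (as happened at CIG) would pin this near the minimum.
    double difficultyFor(PlayMetrics m) {
        double skill = m.completionRatio - 0.1 * m.deaths;
        return Math.max(0.0, Math.min(1.0, skill));
    }

    // Produce a trivial "level": a sequence of gap widths to jump over.
    int[] generate(PlayMetrics m, int length) {
        double difficulty = difficultyFor(m);
        int[] gapWidths = new int[length];
        java.util.Random rng = new java.util.Random(42);
        for (int i = 0; i < length; i++) {
            // Wider gaps for more skilled players.
            gapWidths[i] = 1 + (int) Math.round(difficulty * (1 + rng.nextInt(3)));
        }
        return gapWidths;
    }
}
```

The point of the sketch is the data flow: metrics from one playthrough in, a difficulty-biased level out. By all accounts, no actual entry closed that loop.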


Photo 5: Julian Togelius (organizer) playing what should have been the winning entry, which ended up generating random and very difficult levels due to a text file encoding bug. In the end, the levels were surprisingly playable, extremely creative, and probably the most rewarding of all the generated levels despite the bug!

The 2K BotPrize suffered a bit this year. There were fewer entries than previously, presumably due to the high barrier to entry compared to the Mario AI contest. Working with Java when most game AI code is C++, and having an outdated closed-source engine, doesn't help; many developers also seem to be focusing on the StarCraft competition later this year. A few factors made the competition logistics sub-optimal as well:

  1. The whole evaluation is a set of free-for-all deathmatch games, with bots and judges playing in a large level. This results in rather chaotic gameplay with few signs of tactical play.

  2. The bots are primarily being judged on their motor skills, such as aiming/turning and moving. The movement seems to be heavily constrained by Unreal Tournament 2004's navigation implementation, and the turning still looks very robotic when spectating from a first-person perspective.

  3. The judging now happens at runtime within the game. You pick up the link gun and aim at the player to tag, then LEFT-click to mark them as a bot or RIGHT-click to mark them as a human player. You only need to do this once, but judges found themselves doing it repeatedly.

  4. Bots could judge, but were not built for it. So it was particularly easy to pick out the humans (the judges, who were all judging) from the bots (who were playing normally).

  5. One judge actively tried to act like a bot, but still scored higher than the best bot. This was a result of the meta-game of having judges compete against each other to identify bots and players.

  6. All judges (effectively also players) were in the same game and even the same room, so it was easier to tell implicitly whether you were fighting another judge... Chances are you'd get a subtle reaction from them, or notice a pattern in their mouse and keyboard activity.

As I mentioned afterwards, the contest could be vastly improved by focusing on 1 vs. 1 matches with many more tactical opportunities (like Quake Live), having the human players under evaluation not be judges, and letting the judges decide who is a bot (or not) outside the game. The link gun may have some benefits for tagging realistic or unrealistic behaviors "online", but it could also be done more accurately by going over the recordings.


Photo 6: A judge (Gordon Calleja) playing against the bots in Unreal Tournament 2004. Gordon not only turned out to be the best judge, but was also trying to behave like a bot. Yet he still scored 38% "humanness", noticeably above the winning bot at 31%.

Final Thoughts

We're expecting the papers from CIG 2010 to be published online shortly, and when that happens I'll write some reviews of the best papers on the blog. In the meantime, check out the recorded sessions on Vimeo. (My tutorial is there too...)

CIG 2010 was an interesting duality for me. While the conference itself was an amazing and stimulating event for me and many others there, my feelings after leaving were not the most optimistic. If this post seems less positive than usual, it's no coincidence! There really hasn't been much academic research & collaboration on game AI in the past, and things are still in their infancy compared to other fields of video games. It doesn't help that few industry developers attend such conferences, and once they do (e.g. to give a talk) they rarely return.

That said, things are on the right track! The competitions are a great way to encourage practical solutions from the community, even if I'd personally allow the AI to access all the information it wants. Also, there's much better information available to researchers, so new projects are better informed. That's a big motivation for us at AiGameDev.com as well, and rest assured we're working hard on the other issues I brought up too!


Photo 7: View from the top of the main auditorium during CIG 2010 at the IT University in Copenhagen, Denmark.

Discussion (5 Comments)

jcothran on August 31st, 2010

Hey Alex, thanks for the conference coverage and summary. I'd agree that with the BotPrize contest it would be better to separate judging (maybe as an invisible player) from tactical playing (how the game is normally played). And I'm guessing AI contestants probably still need to incorporate some player unpredictability, variety and playfulness (to pass for human) into their behaviors, rather than have the bots always in 'terminator' mode. I think it should be possible to have a bot pass for human in these informal contests with a less gameplay-experienced judging panel, as I think they would be looking more for flaws that give the bot away than for the strengths of a good tactical player.

The larger philosophical question for me is the monetization and patenting of designs and ideas: an idea or intelligence which is openly or freely developed or distributed is not really valuable (by itself) in the corporate business sense (though there's always hope in packaging and marketing; see bottled water). So I only expect to see stronger combined and reusable intelligences developed by corporations who are selling something other than the AI itself: some other technically or legally difficult quantity to produce or distribute, such as the game world or the artistic/narrative content. As of yet I still haven't seen evidence of any established game franchises providing useful APIs to their AIs or game data-mining. It's understandable why they don't support this, as nobody has shown a way to make money, reduce costs or improve quality by outsourcing their approach to AI.

On a different note, I found this article about the composer David Cope an interesting read: he created computer programs that analyzed and reproduced musical works passably in the style of classical composers (a musical Turing test), but the machine production of passable 'art' opens the usual can of worms. http://www.miller-mccune.com/culture-society/triumph-of-the-cyborg-composer-8507/

Cheers, Jeremy

Zero37 on September 4th, 2010

Hi, Georgios and Julian have recently posted the CIG'10 conference proceedings online (link from the CIG Google Group): http://game.itu.dk/cig2010/proceedings/Start.html

Leo

michalb on October 27th, 2010

Just wanted to post some comments on BotPrize issues 1) and 2):

1) I agree that deathmatch is not that interesting. However, other game types are available; e.g. capture the flag in Pogamut is debugged pretty well, so I guess there won't be any problems switching to it.

2) I agree that the implementation of navigation in Pogamut was anything but perfect. The good news is that we have worked on these issues: we now provide better control over turning speed, plus completely redone path finding and navigation (based on the Loque bot, the BotPrize 2008 winner). It's a pity that, from your comments, it seems no one tried to implement their own path finding and navigation (or maybe they tried, but the results weren't good enough?). It is also a mystery to me why no one used LoqueBot's navigation algorithms: it won BotPrize 2008, its navigation was much better than what was originally provided with Pogamut, and it was open source and downloadable on our pages. :-)

From my experience, when building a good deathmatch bot the most important things are navigation, movement and path finding. Some fancy AI stuff or decision making is nice, but first the bot needs to move and shoot believably (and this is mostly about parametrization and fine-tuning).

Anyway, we have ported Pogamut (http://pogamut.cuni.cz) to Unreal Engine 3 (UDK, http://www.udk.com), so there is a chance, I guess, that the next BotPrize (if there is one) will be on UDK. :-)

Best, Michal

alexjc on October 28th, 2010

Hi Michal, thanks for your comment.

michalb wrote: "I agree that deathmatch is not that interesting."

No, that's not what I meant. I think free-for-all is a very bad choice. Even 1 vs. 1 deathmatch would be better, and it's what I'd suggest starting with.

michalb wrote: "It's a pity that, from your comments, it seems no one tried to implement their own path finding and navigation."

I've discussed this before in the forums here. The fact that UT2004 is closed source makes implementing a modern pathfinding system a real pain, and you probably can't do it. I admire your efforts with the new release of Pogamut, but it won't compare to Quake 3's approach, or anything you could do if you had access to the code -- for example integrating Recast. I'm certainly curious to try the UDK version of Pogamut!

Alex

michalb on November 2nd, 2010

alexjc wrote: "The fact that UT2004 is closed source makes implementing a modern pathfinding system a real pain, and you probably can't do it."

There are two things:

1) Yes, you are certainly limited by the fact that UT is closed source, and you cannot add your favorite navigation library directly to the engine. It is even problematic to compute a NavMesh in UT (we tried that with some results, but they were not good enough to be usable).

2) The other thing is that you can have a decently moving bot in UT just with the information UT provides. There are tons of additional navigation parameters in UT that nobody uses (e.g. radiuses around NavPoints, radiuses around NavLinks, reachability information, and navigation flags for jump spots, lifts, doors, etc.). With this information anyone can implement decent navigation in UT. What I am trying to say is that navigation in UT could be much better in Pogamut or BotPrize if someone made the effort of using all the information (and we made the first step with the new Pogamut-Loque navigation). I even think that the navigation in UT, as it is now accessible with Pogamut, is NOT an issue when implementing a believable bot. So yes, you can't have Recast in UT, but you can still have decently moving bots. However, I understand that for some people it is not interesting to implement navigation with the rather old NavPoint-style approach in the rather old Unreal Engine 2.

About Unreal Engine 3: as far as I know, the navigation system there hasn't changed much from Unreal Engine 2; there are still navigation points plus links (and only this is currently accessible in PogamutUDK). There should be a NavMesh accessible, but we haven't investigated that yet. But what is really great is that with DLLBind (http://udn.epicgames.com/Three/DLLBind.html) you can link your own DLLs to UDK... I am looking forward to trying it! :-)

Best, Michal
