Editorial

What If Game AI Had Been Solved…

Alex J. Champandard on September 11, 2008

It’s been a while since I last wrote an editorial, but I personally feel this is one of the most important topics. In short, game AI in industry has really come of age recently in terms of technology, so it might be time to question our assumptions and see what else we can do to improve the field of game AI as a whole, as well as the community behind it.

The subtitle of the report sums it up:

“Looking beyond technology to
improve AI in games.”

Below is the PDF for you to download and share as you see fit! Also, if you’re an AiGameDev.com Insider, you can download the complementary White Paper Pack that goes with the report.

What If Game AI Had Been Solved?
Download PDF (331 KB, 15 pages)

Feel free to post below if you have any comments about the premise of this report, or the various arguments I put forward. Also, if you’d like to share your opinion on what’s holding game AI back most, be sure to vote in this poll and join the discussion.

Discussion (17 Comments)

runestone on September 12th, 2008

After reading that article, I think what alarmed me the most was that an industry person with no academic background could be placed in a position to review academic papers. On the brighter side, Game AI is emerging in the academic arena, and CRCs for this kind of research are becoming more common. There are even a few academic-level conferences in Game AI. So, not all is lost to some misguided meritocracies who think it is all about a narrow set of trade practices. Given that human-computer interfacing is at the heart of AI, one can certainly expect Game AI to merge with traditional AI in the near future. After all, there is no greater encouragement for traditional AI schools to adopt Game AI than the sizable CRCs likely to come through Game AI research. For example, CRCs in game-oriented workflow management systems are rather big at the moment. Watch this space, because academics will soon redefine Game AI.

alexjc on September 12th, 2008

White papers are not necessarily academic. AIIDE prides itself on being practical and industry applicable too. The Program Committee has included a variety of people from industry for a few years now, but most of them have more background in research and/or academia than you would think at first glance. Anyway, I look forward to seeing what academics can come up with! Alex

zoombapup on September 12th, 2008

Actually, I see no problem at all with a developer reviewing academic papers, especially someone of Alex's background. Frankly, sometimes academics need this kind of industry feedback so that they don't go solving problems that aren't really worth solving for the industry anyway. Fair enough if you're interested in more blue-sky research, then all's fair in love and war; but if you're saying that your research is applicable to the games field, you should at least listen to what game developers have to say. You might be surprised to find that lots of developers are also academics, or have been previously (the owner of Mad Doc Software comes to mind). Better to respect the input of both sides of the issue (commercial and academic) than to assume that either position can't learn from the other.

Sergio on September 12th, 2008

When I first read this I was tempted to argue semantics. What do you mean by technology? What is Game AI anyway? But I won't. I think it's more interesting to talk about the practical side of this statement. I believe there are two additions you could make.

First, Game AI may be solved, if you consider only today's problems. If you went back a few years, developers might have said Game AI was solved because they had state machines and A*. That was more than enough then, but new game designs appeared that required new techniques, so technology kept evolving. I believe this trend will continue. We will face new, harder problems, and we'll develop new, more sophisticated solutions.

But I think the main problem AI faces in games today is not that the technology is not there. Even in a game that enjoyed a perfect implementation of state-of-the-art techniques you could find sub-par AI. And the reason is that most AI is not technology, it's content. In the same way that using the Unreal 3 engine doesn't guarantee pretty graphics without great art direction, technology is only the tip of the iceberg. Nearly everything the player experiences is actually content: a character playing a sequence of animations, proper transitions between behaviours, manually scripted conditions to account for possible combat scenarios, dialogue that triggers at the right time, etc. A character will look richer and more intelligent if it has an extra low-level, automatic behaviour than if you make the high-level AI smarter.

This is hard for us AI coders to hear, and admittedly it's not true in every case. But it's the main reason studios don't go crazy with new technology. For them, the AI is actually in the content, not in the navigation algorithm. Games that require truly procedural, sophisticated AI are rare. Designs that are mostly based on AI are risky, and the technology for those is definitely not solved. So instead, we have AI coders carefully adding content to create interesting characters, based on fairly simple technology.
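To make Sergio's point concrete, here is a minimal sketch (purely illustrative, not from any shipped game): the "technology" is a trivial two-state machine, while everything the player would actually notice lives in authored content tables. The state names, animations and barks are invented for the example.

```python
import random

# Authored content: per-state animations and barks (hypothetical data).
# This is where the perceived "intelligence" of the character lives.
CONTENT = {
    "idle":   {"animations": ["lean_on_wall", "check_watch", "stretch"],
               "barks": ["Quiet night...", "Could use a coffee."]},
    "combat": {"animations": ["duck_behind_cover", "blind_fire", "reload"],
               "barks": ["Contact!", "Cover me!"]},
}

class Guard:
    def __init__(self):
        self.state = "idle"

    def update(self, sees_player):
        # The entire "AI technology": a two-state transition rule.
        self.state = "combat" if sees_player else "idle"
        # Everything the player actually perceives is picked from content.
        entry = CONTENT[self.state]
        return random.choice(entry["animations"]), random.choice(entry["barks"])

guard = Guard()
print(guard.update(sees_player=False))  # e.g. ('check_watch', 'Quiet night...')
print(guard.update(sees_player=True))   # e.g. ('blind_fire', 'Contact!')
```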

zoombapup on September 12th, 2008

I'm not sure the technology we are using to produce deeper and deeper characters with more "life" in them is actually that simple. Certainly a lot of the research I'm reading from guys like Michael Gleicher in animation is very much at the boundary of what we are looking at for games right now. I think the field of AI that deals not with basic behaviour but with emotional modelling and "character", touching on behaviour, animation, even things like improvisation and linguistics, is actually still in an embryonic state. So we may have "solved" the basic execution model, but we've still got a ton of new challenges to face. Frankly, we haven't really started to address the issues even from a purely design point of view. What kind of games are technically possible if we take an execution model for granted? Then on top of that, how do we convince execs that there is a compelling reason to actually do this work? I mean, if someone can count "realistic hair" as a USP, why couldn't they count "realistic depression and elation" as a USP? Maybe we need to train more execs on the potential space?

alexjc on September 12th, 2008

The way I look at it is that there are a finite number of ways to build the logic for a character that reacts sensibly to its environment. Game AI is already using the best ideas from robotics and virtual agents, and we're doing a pretty good job -- I think any further progress we make from now on will have diminishing returns. Granted, you can put these characters into different environments, in control of different bodies/systems, and give them different roles & goals, etc., but the process of creating them will be very similar in each case, and extremely focused on content. As I said in the report, Game AI will always be about special cases, so there's no working around that one!

Sergio: The importance of content in a way also emphasizes the fact that pure technology is taking the back stage increasingly, which is the essence of the report. But that reminds me, I need to pencil you in for a panel on asset pipelines, data workflows, etc. :-)

Phil: You're definitely right about emotions, behavior, animations, etc. But I'd be interested to hear if you think we'll be solving this in any other way than by creating a multitude of special cases. The technology might help reduce our overheads, but I doubt there'll ever be a simple elegant algorithm for modeling human quirks! To illustrate my point, this is what they're doing with The Sims 3 (http://kotaku.com/5047562/a-behind-the-scenes-look-at-sims-3), and to me that confirms that you can do anything with a big budget :-)

Anyway, thank you both for your great comments. It's great food for thought! Alex

zoombapup on September 12th, 2008

You know, watching that Sims movie, it makes me feel quite nauseated how everything is so "EA" :) I don't know if it's just that I've got a European/Japanese view of aesthetics, but I just hate the styling of The Sims so much (and it gets worse as they make it bigger), although I love the game concept. But it's definitely got some interesting ideas, specifically the personality traits thing, although I don't know if selecting from a palette of traits is really the best option. It seems like most of it is really only used to select animations. The kind of thing I'm interested in is far more procedural than that; it's more like the stuff Ken Perlin is doing, where the personality traits feed the procedural animation engine parameters (shoulders slouching more if a character is depressed). I wonder if there is a source that encodes all of these visual cues to personality somewhere. Might have to go and google for it :)
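A hedged sketch of the kind of trait-to-parameter mapping Phil describes here: personality traits drive low-level procedural animation parameters, so a depressed character slouches more and walks slower. The trait names, parameter names and coefficients are all invented for illustration.

```python
# Hypothetical mapping from personality traits (0..1) to procedural
# animation parameters. All names and coefficients are invented.

def animation_params(traits):
    depression = traits.get("depression", 0.0)
    confidence = traits.get("confidence", 0.5)
    energy     = traits.get("energy", 0.5)

    return {
        # Shoulders drop with depression, lift with confidence.
        "shoulder_slouch": max(0.0, min(1.0, 0.2 + 0.7 * depression - 0.3 * confidence)),
        # Head tilts down when depressed (degrees, negative = downwards).
        "head_pitch_deg": -20.0 * depression,
        # Walk speed (m/s) scales with energy and drops with depression.
        "walk_speed": 1.4 * (0.5 + 0.5 * energy) * (1.0 - 0.4 * depression),
        # Gesture amplitude follows confidence.
        "gesture_scale": 0.5 + 0.5 * confidence,
    }

print(animation_params({"depression": 0.8, "confidence": 0.2, "energy": 0.3}))
```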

Sergio on September 12th, 2008

alexjc: "The importance of content in a way also emphasizes the fact that pure technology is taking the back stage increasingly, which is the essence of the report."

True, my point is that technology is only a small part of what Game AI is. You mention in your report that Game AI is pragmatic, that we'll ignore techniques that are not necessary because we can solve problems in different ways. Those different ways are realized through the content, so if we're switching approaches, it's only fair to consider both as part of our work. The focus is moving from runtime algorithms to development tools, and to finding ways to create content faster and with greater quality. Those problems are not solved, and they greatly influence the way we do AI for games.

alexjc on September 12th, 2008

Sergio, great point about the focus moving onto tools and production. I agree there's a huge amount of work to be done there, for instance allowing us to quickly build something equivalent to The Sims 3 according to Phil's odd British tastes :-) That said, we could do all of this already; it'd just take huge budgets and lots of manual labor... so I'd consider the next steps more about optimizing and streamlining than "solving", which would imply it's currently not doable. (Semantics, I guess!) But I think this realization, regardless of where you draw the line, is very important because it'll help shift our focus onto things that are increasingly important for the community as well as individual developers. Alex

runestone on September 13th, 2008

zoombapup: "Actually, I see no problem at all with a developer reviewing academic papers, especially someone of Alex's background."

Alex has done some research, which I was not aware of in my original post. On non-academic industry parties reviewing academic papers: my sub-point with that comment was about the quality of papers. Take for example "Game AI Wisdom", where about 80% of the papers in the book are obfuscated by poor writing styles and unconventional terms. Yet these are industry-quality papers that get published. This area of AI cannot grow without good communication, and that is not what we see at the moment.

zoombapup on September 13th, 2008

Poor writing styles I can definitely agree with (my own included), but I don't think I've seen any unconventional terms in them that I recall. Perhaps not academic terms, but certainly not terms I've never encountered within the industry. The biggest issue is that someone in industry usually doesn't have much background in writing papers, so there is rarely a consistent style. But it's the knowledge expressed that counts, not the writing style, and I don't see any particular problems understanding the knowledge expressed in the articles. I could counter with many academic papers I've read which are really poorly written and which actually contain large holes in the knowledge being expressed (especially with respect to implementation details). I agree it would be nice if both sides came together and we got academic-quality papers with actual implementation details within, but I doubt that's going to happen anytime soon.

runestone on September 13th, 2008

zoombapup: "Poor writing styles I can definitely agree with (my own included), but I don't think I've seen any unconventional terms in them that I recall. Perhaps not academic terms, but certainly not terms I've never encountered within the industry. [… snip] I agree it would be nice if both sides came together and we got academic-quality papers with actual implementation details within, but I doubt that's going to happen anytime soon."

Implementation details in a popular programming language are nebulous; moreover, we avoid such features because they will date a paper and will only interest a select audience. A paper is far more valuable if it can convey information to many readers, including those who do not program in, say, C++. We have formal mathematical statements, clearly labelled diagrams and well-established nomenclatures in academic papers, so that anyone at any time can read and understand these works. I mean, what is programming? Pseudo code is a program; a mathematical statement is a program; an unambiguous and complete description of a process in English is a program. So, implementation details are abundant. Moreover, academic works sit firmly on the mathematical classifications of problems. Therefore, many statements are complete by simply referring to some problem classifications. However, I do digress here, as there are some bad academic papers, too.

William on September 13th, 2008

alexjc: "There isn't anything unsolvable in game AI. Some things are still a little inefficient to build, but all it takes is a bit of time..."

From a theoretical perspective, you might be right (although the fact that academic AI still struggles with Go and Poker suggests that any game world more complex than 19x19 squares or a few decks of cards and five NPCs contains a good share of hard-to-solve problems; we have yet to address these in video games). From an engineering perspective, there is still a long way to go.

Aside from the development lead-time issue, there are other issues. Although we have a list of documented point solutions (for example, the white paper pack), we lack knowledge and guidelines to:
- decide when (not) to apply a point solution (few of the white papers do a good job of explaining when and where the technique falls short or a less complex technique is preferred);
- integrate multiple point solutions (sometimes the techniques have conflicting requirements or require yet another way to represent the game world or actions).

Two examples to make this more concrete. "Near-Optimal Hierarchical Pathfinding" (HPA*, from the white paper pack) is a really nice paper. The main claim, "Compared to a highly-optimized A*, HPA* is shown to be up to 10 times faster, while finding paths that are within 1% of optimal", is impressive, but the paper does not:
- indicate under which conditions sub-optimal paths are found;
- perform tests on 3D terrain with plenty of vertical and one-way links.

If I, based on this paper, committed to improving the game's pathfinding with HPA* in about one week, I might run into surprises, especially if my game has many vertical and one-way links. (The algorithm happens to be sub-optimal when there is a shortest path between two nodes in a cluster going outside the cluster; this is likely to occur when a cluster contains a pair of nodes n1, n2 where cost(n1, n2) >> cost(n2, n1) because of a one-way link or a ladder. To avoid this, clusters containing these kinds of links require further splitting.)

Integration-wise, when a path-finding mechanism imposes its own restrictions on what can and cannot be an area, it may end up being incompatible with other needs for areas (AI reasoning about terrain, designers instructing the NPCs to act within a specific area only, streaming). The obvious work-around is to implement multiple area concepts side-by-side, but this brings additional costs, may require extensions to the level editor, etc.

runestone: "Implementation details in a popular programming language are nebulous; moreover, we avoid such features because they will date a paper and will only interest a select audience."

I'm not sure we really need a popular programming language. However, a paper describing an algorithm should at least discuss time/space complexity and provide measurements for representative problems as reference, otherwise the paper is mostly worthless to the game industry. A few of the AIIDE-submitted academic papers I reviewed this year were lacking this.
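A small sketch of the failure case William points out, assuming a simple weighted-graph representation (the graph, cluster and entrance names below are invented; this is not the HPA* paper's code): if the true shortest path between two entrances of a cluster leaves the cluster, e.g. because of a one-way drop or ladder, then distances precomputed only inside the cluster overestimate the cost, and the abstraction can return sub-optimal paths unless the cluster is split further.

```python
import heapq

def dijkstra(graph, start, allowed=None):
    """Shortest-path costs from start; optionally restricted to 'allowed' nodes."""
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in graph.get(node, {}).items():
            if allowed is not None and nbr not in allowed:
                continue
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(queue, (nd, nbr))
    return dist

def cluster_needs_split(graph, cluster, entrances):
    """True if some entrance pair has a cheaper path through nodes outside the cluster."""
    for a in entrances:
        inside = dijkstra(graph, a, allowed=cluster)
        anywhere = dijkstra(graph, a)
        for b in entrances:
            if b != a and anywhere.get(b, float("inf")) < inside.get(b, float("inf")):
                return True
    return False

# Tiny example: a one-way link n2 -> n1 inside the cluster (a drop or ladder)
# means the only n1 -> n2 route goes through the outside node x.
graph = {
    "n1": {"x": 1.0},
    "n2": {"n1": 1.0},   # one-way link back
    "x":  {"n2": 1.0},
}
print(cluster_needs_split(graph, {"n1", "n2"}, ["n1", "n2"]))  # True: split this cluster
```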

alexjc on September 13th, 2008

Thanks William. You just reminded me again how useful it would be to have a bunch of white papers re-implemented so they can easily be tried out! Figuring out the engineering aspect also arguably only takes time. It's not particularly challenging, but it always takes longer than you expect... Alex

runestone on September 14th, 2008

William: "However, a paper describing an algorithm should at least discuss time/space complexity and provide measurements for representative problems as reference, otherwise the paper is mostly worthless to the game industry. A few of the AIIDE-submitted academic papers I reviewed this year were lacking this."

Do you mean the time/space complexity of decision problems (NP, P, NP-Hard, PSPACE), or do you mean the running cost of algorithms (Big-O, etc.)? I agree it would be good if papers showed the running costs of their algorithms. An algorithm might be O(n^2) on a deterministic Turing machine but O(1) on a non-deterministic Turing machine. The industry is already looking at parallel processing... Which do you think is more important in the case of massively parallel processing, the complexity class associated with a given problem or the running time of an algorithm that can solve the problem (I mean, in a paper)?

William on September 14th, 2008

runestone: "Which do you think is more important in the case of massively parallel processing, the complexity class associated with a given problem or the running time of an algorithm that can solve the problem (I mean, in a paper)?"

I'd appreciate seeing the running time (or space) of an algorithm and how this scales (or doesn't) with hardware, since that enables the reader to judge whether the proposal is a significant improvement or not (compared to 'in-house' implementations or other published solutions). The complexity class of the problem should already be clear to the reader.
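A minimal sketch of the kind of measurement William is asking for: time an algorithm across increasing problem sizes so a reader can judge how it scales. The harness below is hypothetical and uses Python's built-in sorted() as a stand-in for whatever algorithm a paper proposes.

```python
import random
import time

def timed_run(algorithm, data):
    """Wall-clock seconds for a single run of the algorithm on the given input."""
    start = time.perf_counter()
    algorithm(data)
    return time.perf_counter() - start

def measure(algorithm, sizes, repeats=5):
    """Best-of-N timings across problem sizes, so scaling is visible."""
    results = []
    for n in sizes:
        data = [random.random() for _ in range(n)]
        best = min(timed_run(algorithm, list(data)) for _ in range(repeats))
        results.append((n, best))
    return results

for n, seconds in measure(sorted, sizes=[1000, 10000, 100000]):
    print(f"n={n:>7}  best of 5: {seconds * 1000:.3f} ms")
```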

gwaredd on September 14th, 2008

Far from having game AI solved, I believe we are on the cusp of opening up some truly interesting and exciting problems. Sure, we have the basic perfunctory problems down. There are areas we can improve on, of course, and technical obstacles to overcome in implementation (particularly when things scale up). But we are far from cracking believable and compelling NPCs in an RPG, or solving the interactive narrative problem. If we want better games we need more imaginative problems than path finding. An NPC in a game could well be the first program to pass the Turing test; and let's face it, I've met plenty of real players online who would fail ;) Commercial pressures mean the industry is not a suitable vehicle for open-ended research, so pushing the boundaries must come from outside. Unlike graphics though, academic AI research has been largely irrelevant for us. This may change, but I won't hold my breath. But then it doesn't really need to. For the tougher questions we need to look towards sociology, psychology, narratology and other disciplines for inspiration. When it comes to codifying and implementing particular models, there are much better programmers in industry.
