A Critique of Case-Based Reasoning for Games

Alex J. Champandard on October 17, 2008

Generally speaking, I’m very impressed with case-based reasoning (CBR) and the research that’s being done applying it to games. It’s arguably one of the most promising avenues of academic research I’ve seen in a while.

In particular, there are a few key issues that CBR tackles head on that make it interesting for the future of game development:

  • It focuses heavily on workflows and methodology for developing AI.

  • It tries to remedy the content bottleneck at the behavioral level.

  • It emphasizes human assistance, not just raw AI technology.

All three of these topics are critical to game AI, so from my perspective I’d say that CBR is on the right track. However, as you’ll see shortly, some of its core principles are questionable when applied to AI in games…

Note: I’m writing this editorial mainly to encourage research to address these problems, even if it involves adapting the useful ideas of CBR into a different kind of technology altogether. So if you have any comments or ideas feel free to post them below!

What is CBR?

Case-Based Reasoning combines knowledge acquired from experts during development, expressed as a database of cases, with automated reasoning at runtime: the system retrieves the expert example that best matches the current situation and applies it.

Figure 1: Overview of the Darmok system, which applies case-based planning to the strategy game Wargus. (See paper below.)

Generally speaking the workflow can be summarized as follows:

  1. Let an expert play a few games under the same conditions the AI will face.

  2. Monitor and log all of the player’s actions for later analysis.

  3. Annotate each of the actions in terms of the expert’s strategy.

  4. Store each of these cases in a database that can be used at runtime.

  5. Use a nearest-neighbor algorithm to retrieve the best case for each situation.

The algorithm works when there are sufficient cases, since it can figure out that “the expert did this in a similar situation, so I’ll try this now to achieve the same objective.” So far, so good. But where does CBR start to struggle when applied to game AI?
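As a rough sketch of steps 4 and 5, case retrieval can be as simple as a nearest-neighbor lookup over numeric situation features. The features, cases and annotations below are hypothetical, not taken from any of the cited systems:

```python
from dataclasses import dataclass
from math import dist

@dataclass
class Case:
    situation: tuple  # numeric features describing the game state
    action: str       # what the expert did
    goal: str         # annotation: the strategy the action served

def retrieve(case_base, situation):
    """Step 5: return the stored case whose situation is nearest (Euclidean)."""
    return min(case_base, key=lambda c: dist(c.situation, situation))

# Hypothetical RTS features: (own_units, enemy_units, gold)
case_base = [
    Case((10, 2, 500), "attack", "press the advantage"),
    Case((3, 12, 800), "build_defenses", "survive the rush"),
    Case((5, 5, 100), "harvest", "grow the economy"),
]

best = retrieve(case_base, (9, 3, 450))
print(best.action)  # the expert attacked in the most similar recorded state
```

Real systems weight and normalize the features; with raw values like these, the `gold` dimension would dominate the distance.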

I will assume you are familiar with the anatomy of a game AI architecture for characters, which I wrote about previously on this blog. The article describes the different kinds of problems that are common in game AI, ranging from highly constrained decisions to almost unrestricted choices.

Precise Sequential Control

The first thing to note is that very constrained problems — either because of design requirements or asset restrictions — require accurate control over the behaviors. In the games industry, that’s typically done by sequences of actions (akin to small scripts) which handle special cases very well. Most often, you’ll find such sequences at the level just above animation control (e.g. move into cover first, then crouch down) or to deal with story element or level goals in the correct order (e.g. play line of dialog, then attack).
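The appeal of explicit sequences is that ordering is guaranteed by construction. A minimal sketch of such a sequence runner (the NPC state and actions are invented for illustration) shows why: an out-of-order step simply fails, rather than silently producing the wrong behavior.

```python
# Hypothetical NPC actions; each returns True on success.
def move_to_cover(npc):
    npc["in_cover"] = True
    return True

def crouch(npc):
    if not npc["in_cover"]:  # crouching only makes sense once in cover
        return False
    npc["crouched"] = True
    return True

def run_sequence(npc, actions):
    """Run actions in strict order; all() short-circuits on the first failure."""
    return all(action(npc) for action in actions)

npc = {"in_cover": False, "crouched": False}
print(run_sequence(npc, [move_to_cover, crouch]))    # True: correct order

npc = {"in_cover": False, "crouched": False}
print(run_sequence(npc, [crouch, move_to_cover]))    # False: wrong order aborts
```

A CBR system has to induce this ordering constraint from examples; a script or behavior-tree editor lets the designer state it directly.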

Modern CBR techniques go to a lot of trouble to capture these sequences from the expert’s annotations. I’m using the following paper as a reference; it works out the dependencies between actions with a little help from those annotations:

On-Line Case-Based Plan Adaptation for Real-Time Strategy Games
N Sugandh, S Ontañón, A Ram
23rd AAAI Conference on Artificial Intelligence (AAAI-08)
Download PDF

The downside of CBR is that, even after all the work of generating dependency graphs, there’s no guarantee the system has induced the specific sequences required for the game AI to function correctly when decisions are highly constrained. As a result, it’s left to either QA or the expert to step in after the CBR analysis has run and verify that each sequence was induced correctly. You could add automated tests for this too, but at that stage you’re wasting your time compared to explicitly writing down these sequences with a good editor.

In fact, this is the crux of the argument: if you have a good scripting language, or even a visual tree editor to capture sequences, you’ll be orders of magnitude more productive (and more reliable) than an expert trying indirectly to get the system to induce specific sequences from examples. As such, it’s fair to claim that CBR isn’t particularly well suited to these kinds of problems in game AI.

Learning Decisions from Examples

I’m sure you’re thinking: fair enough, CBR can’t effectively handle situations that require tight control, but surely there are plenty of other situations that can benefit from this approach. What’s left are the cases where decisions are less constrained. For instance, in an FPS: which cover point to pick given the combat situation; or in an RTS: what type of defensive units to build to repel an enemy intrusion. These decisions are much more open because any of the choices will result in reasonable behavior, although possibly not the most competitive one.

“Expert examples take time to acquire, annotate and process.”

The problem with case-based reasoning at this level is that it relies on expert examples, and these take time to acquire, annotate and process. Given how pressed for time game developers are, any machine learning algorithm will struggle here because it has few examples to work with.

The solution would be to use the huge quantities of data available from local playtests or even online beta tests. This scales very well, since it doesn’t require an expert to annotate the samples. It also opens the door to statistical machine learning techniques, which are much more likely to be robust to oddities in the data and can cover many more situations than the expert could on his own.
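To make the contrast concrete, here is a minimal sketch of the kind of statistics that large unannotated playtest logs enable. The log format, situation buckets and actions are all hypothetical; the point is that win rates per situation need no expert annotation at all:

```python
from collections import defaultdict

# Hypothetical playtest log: (situation_bucket, action_taken, won_engagement)
log = [
    ("outnumbered", "retreat", True),
    ("outnumbered", "attack", False),
    ("outnumbered", "retreat", True),
    ("even", "attack", True),
    ("even", "attack", False),
    ("even", "retreat", False),
]

# Tally wins and attempts for each (situation, action) pair.
tally = defaultdict(lambda: [0, 0])  # -> [wins, attempts]
for bucket, action, won in log:
    tally[(bucket, action)][0] += int(won)
    tally[(bucket, action)][1] += 1

def best_action(bucket):
    """Pick the action with the highest observed win rate in this situation."""
    rates = {a: w / n for (b, a), (w, n) in tally.items() if b == bucket}
    return max(rates, key=rates.get)

print(best_action("outnumbered"))  # retreat
```

With thousands of logged sessions instead of six lines, the same counting approach smooths over individual players’ oddities, which is exactly what a handful of expert demonstrations cannot do.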

Screenshot 2: Strategy games like Wargus, despite their simple rules, make for great research prototypes as the complexity increases very quickly with large maps.

Curse of Dimensionality

Another, more technical flaw of the CBR approach is that the underlying nearest-neighbor algorithm for selecting cases similar to the current situation doesn’t scale very well. As the number of dimensions in your problem space increases, the distance between two random points tends towards a constant. In practice, this means you’re going to have a hard time finding a genuinely “similar” case for any given situation, particularly if it takes hundreds of factors to describe each one. If there are dozens of similar cases in the database, each differing in a handful of factors, which is closest?

All machine learning techniques, and AI as a whole, suffer from dimensionality problems. But since CBR has limited samples to work with due to the expert bottleneck, there’s not much it can do to mitigate them. Approaches based on large quantities of data processed by statistical machine learning at least have more samples to lean on, although it still remains a hard problem…


Case-based reasoning is tempting since it provides an elegant, uniform approach to many problems in game AI. However, this “jack of all trades, master of none” quality is ultimately its downfall. For highly constrained problems, expert annotations and implicitly induced sequences cannot compete with dedicated editors in the hands of designers, who can create explicit sequences with little hassle. For problems where decisions are less constrained, CBR has the disadvantage of relying on a form of machine learning built from few expert examples, which doesn’t scale well in terms of complexity or development time compared to techniques that leverage the abundance of data that’s increasingly common these days.

There are a lot of lessons to be learned from the approach that CBR takes towards game AI, but ultimately I feel that the kind of changes required to make it useful in production would make the technology different enough to deserve a name other than “case-based reasoning.”

Further Reading

Real-Time Plan Adaptation for Case-Based Planning in Real-Time Strategy Games
N Sugandh, S Ontañón, A Ram
9th European Conference on Case-Based Reasoning (ECCBR-08)
Download PDF
Situation Assessment for Plan Retrieval in Real-Time Strategy Games
K Mishra, S Ontañón, A Ram
9th European Conference on Case-Based Reasoning (ECCBR-08)
Download PDF

Discussion 5 Comments

meshula on October 18th, 2008

On the topic of collecting expert knowledge, it is certainly onerous to collect the data. However, if your organization has a QA department, they can be a great help. For several games that had an AI that learned from watching humans, I wrote instrumentation to record play sessions along with contextual information so that I could identify the situation at each time sample and correlate that with the decisions made. When you multiply the size of the department times the number of months in test, you've got an incredible data stream. In analyzing the data, I was able to categorize good decisions and bad decisions (by looking ahead at the player's outcomes); this let me put a "skill" knob on the AI.

alexjc on October 18th, 2008

Thanks for dropping by Nick! I think your suggestions are not only cool but very practical! It's exactly the kind of thing I had in mind, but it's great to see that it's a viable solution. How long did it take to establish such a system? Are you allowed to say what game it was? :-) My argument, though, is that this is arguably no longer CBR. It's probably much closer to learning behavior from examples, and reinforcement learning to calculate the value of an individual action based on the outcome. If these are things we need to do to achieve the goals of CBR (instead of relying on the traditional CBR approach) then so be it... Alex

shivanw on October 19th, 2008

Did your analysis of case-based reasoning in games only look at this one domain, Wargus, or did you examine the recent work done by other research groups? Some of the more recent work includes using reinforcement learning in a real-time combat game (Molineaux and Aha 2008), automatically imitating the behavior of a soccer player without the need for expert knowledge (Floyd, Esfandiari, Lam 2008 & Floyd, Davoust, Esfandiari 2008) and learning to play the game of Tetris (Romdhane and Lamontagne at ECCBR 2008 and FLAIRS 2008). I think some of those works, at least in part, address some of your concerns with case-based reasoning.

alexjc on October 19th, 2008

I looked at CBR as it's traditionally defined. The limitations are not particularly dependent on implementation, as they're a conceptual thing -- you'll always have trouble getting around them unless you use different forms of technology. I will take a look at the papers you mention (I think I reviewed one previously), but the fact that they explicitly move towards reinforcement learning is yet another confirmation that CBR struggles on its own to solve the cases where lots of data is available. At what stage does a technique no longer earn the title of "case-based reasoning"? Alex

jamesford42 on October 20th, 2008

From hearing Jeff Orkin talk about the restaurant game last year, I would say it also fits into this area of research and is worth mentioning. In particular I recall being fascinated by the idea that a game could ship day one with no ai but a system in place for recording player actions in different situations, then in a few months ship fully featured ai with little extra work. [I am referring to a primarily multiplayer game in which the ai-bots are emulating normal players.]
