The best games adapt to the player to provide an entertaining experience. But handling adaptation in immersive 3D worlds isn’t trivial! In terms of AI, the challenge is to build a system that:
Requires little work to script each possible adaptation,
Allows you to control the outcome to prevent unrealistic situations.
The games industry has been struggling with this for a while now, with varying degrees of success. But one solution shows more promise than any other…
The State of Learning in Games
Developers have been able to “fake” adaptation in games using a variety of techniques. These include:
Using static logic that reads runtime statistics; call this indirect adaptation.
Having multiple hand-edited scripts for all the major scenarios anticipated by the designers.
The in-game results are rather convincing, but these traditional approaches suffer from two major problems: it takes a lot of time to work out and implement the possible adaptations, and the behaviors are limited to the cases handled by the developers.
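The first technique, indirect adaptation, can be sketched in a few lines: a static rule reads runtime statistics and the rule itself never changes. This is a hypothetical example (the class, thresholds, and function names are invented for illustration, not taken from any engine):

```python
# Sketch of "indirect adaptation": a fixed, designer-authored rule that
# only *reads* runtime statistics. Nothing is learned; the logic is static.

class PlayerStats:
    def __init__(self):
        self.shots_fired = 0
        self.shots_hit = 0
        self.deaths = 0

    @property
    def accuracy(self):
        return self.shots_hit / self.shots_fired if self.shots_fired else 0.0

def enemy_accuracy(stats: PlayerStats) -> float:
    """Static logic: the rule never changes, only the statistics it reads."""
    if stats.accuracy > 0.6 and stats.deaths == 0:
        return 0.5   # skilled player: enemies shoot better
    if stats.deaths >= 3:
        return 0.2   # struggling player: ease off
    return 0.35      # default difficulty

stats = PlayerStats()
stats.shots_fired, stats.shots_hit = 20, 15  # 75% accuracy, no deaths yet
print(enemy_accuracy(stats))  # -> 0.5
```

Note how the limitation described above shows up directly: the AI only ever reaches the three outcomes the developer wrote down, and every new adaptation means another hand-written branch.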
The Academic Perspective
These days, developers with AI backgrounds apply machine learning (ML) techniques to specific problems almost painlessly. Thanks to the application of data mining, the field has matured and found a place in mainstream software.
How best to deal with the random exploration necessary to learn?
It’s already possible to use neural networks or decision trees to find solutions to most batch-learning problems. The catch lies with incremental learning, as it’s hard to guarantee consistent behavior while the learning is ongoing. The order in which the data is learned affects stability, but fundamentally, unpredictable changes in the policy are required for most ML algorithms to explore the possible solutions.
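The exploration problem is easy to see in a minimal epsilon-greedy sketch (a standard exploration strategy, shown here with invented action names rather than any particular game's API): to discover which action is best, the learner must sometimes act at random, and those random picks are exactly the unpredictable behavior players notice.

```python
import random

# Learned action values for an NPC; the names are hypothetical.
Q = {"take_cover": 0.8, "charge": 0.1, "flank": 0.4}
EPSILON = 0.1  # exploration rate

def choose_action(rng: random.Random) -> str:
    # With probability EPSILON the agent ignores everything it has learned
    # and acts at random. This is necessary for learning to progress, but
    # mid-game it means an NPC occasionally charges into gunfire for no
    # visible reason.
    if rng.random() < EPSILON:
        return rng.choice(list(Q))
    return max(Q, key=Q.get)

rng = random.Random(42)
actions = [choose_action(rng) for _ in range(20)]
print(actions)  # mostly "take_cover", with occasional random choices
```

Turning EPSILON down makes behavior consistent but stops the learning; turning it up learns faster but looks erratic. That tension is the core of the incremental-learning problem.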
Realistic Adaptation for Games
To make adaptive games any better than scripts and statistics, you’ll need a form of online learning. So the challenge is finding an architecture that integrates ML to adapt to unforeseen situations, yet never causes the AI to behave unrealistically while learning.
You can think of your AI along two dimensions:
Realism should be controlled entirely by the designers during development.
Intelligence can be varied dynamically at runtime based on the learning algorithm.
In effect, the secret is to let the designers build a sandbox for the behaviors, then let the machine learning take control within those limits.
A Framework for Compromise
“A promising solution is to combine a planner with heuristic learning algorithms.”
Using a planner, the underlying logic generates context-sensitive behaviors that always have a minimum level of realism. Then, machine learning algorithms guide the planner heuristically to generate solutions suited to the current gameplay experience (e.g. more or less intelligent).
Of course, this relies on you having a planner that’s easy to use in game development. You don’t need a full-blown planner to do this; a traditional behavior tree will do the trick. Intuitively, the sequences and decorators in the tree constrain the behavior to its sandbox, and the selectors give the learning algorithm the freedom to pick between behaviors.
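That division of labor can be sketched in a toy behavior tree. This is a minimal illustration, not a production framework: the node names, preconditions, and scoring scheme are all invented. Designer-authored preconditions play the role of the sandbox, while the learner is only allowed to reorder the selector's children.

```python
# Toy behavior tree where a selector is guided by learned scores.
# Preconditions enforce the designers' sandbox; learning only reorders
# the selector's children, so behavior always stays within valid limits.

SUCCESS, FAILURE = True, False

class Action:
    def __init__(self, name, precondition=lambda ctx: True):
        self.name = name
        self.precondition = precondition  # designer-authored constraint
        self.score = 0.0                  # learned utility

    def tick(self, ctx):
        if not self.precondition(ctx):    # sandbox: never run invalid actions
            return FAILURE
        ctx["log"].append(self.name)
        return SUCCESS

class LearnedSelector:
    """Selector that tries its children in order of learned score."""
    def __init__(self, children):
        self.children = children

    def tick(self, ctx):
        for child in sorted(self.children, key=lambda c: c.score, reverse=True):
            if child.tick(ctx) == SUCCESS:
                return SUCCESS
        return FAILURE

    def reward(self, name, delta):
        for c in self.children:
            if c.name == name:
                c.score += delta  # learning reorders, never invents behavior

cover = Action("take_cover", precondition=lambda ctx: ctx["cover_nearby"])
flank = Action("flank", precondition=lambda ctx: ctx["ally_suppressing"])
shoot = Action("shoot")  # always-valid fallback

combat = LearnedSelector([cover, flank, shoot])
ctx = {"cover_nearby": False, "ally_suppressing": True, "log": []}
combat.reward("flank", 1.0)  # feedback from the gameplay experience
combat.tick(ctx)
print(ctx["log"])  # -> ['flank']
```

Whatever the learner does to the scores, the tree can only ever execute actions whose preconditions hold, so a minimum level of realism is guaranteed while the policy adapts.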
How would you approach the problem of learning realistically?