The Apply AI Innovations 2007 conference drew together programmers from the games industry with students and professors from academia. The round table featured about fifteen participants, primarily programmers from Realtime Worlds and Rare, and researchers from Imperial College London.
All participants contributed their thoughts and insights at some stage (a great advantage of smaller groups), which proved very constructive. In fact, the discussions throughout the day were very interesting too, not to mention the reception afterwards!
‘Arty’ photo of the sponsored reception.
How do you handle replanning in a sensible way? This was discussed in the context of my talk earlier, and the answers turned out to be very good generic advice:
Work out what assumptions each part of the plan relies on. Only replan when those assumptions are broken while executing the plan.
Identify key opportunities that could improve the plan, and replan when these opportunities arise during execution.
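The two points above can be sketched as a simple validity check: each plan step declares the assumptions it relies on, and the executor replans only when one of them breaks. This is a minimal illustration, not any specific planner's API; the `Step` class, the world dictionary, and the door scenario are all hypothetical.

```python
# Hypothetical sketch: each plan step declares the facts it relies on,
# and we only replan when a remaining step's assumption is broken.

class Step:
    def __init__(self, name, assumptions):
        self.name = name
        self.assumptions = assumptions  # list of predicates over world state

def plan_is_valid(plan, world):
    """True while every assumption of every remaining step still holds."""
    return all(check(world) for step in plan for check in step.assumptions)

# Example: both remaining steps assume the door is still unlocked.
plan = [
    Step("move_to_door", [lambda w: w["door_unlocked"]]),
    Step("open_door",    [lambda w: w["door_unlocked"]]),
]

world = {"door_unlocked": True}
assert plan_is_valid(plan, world)       # keep executing the current plan

world["door_unlocked"] = False          # an event invalidates the assumption
assert not plan_is_valid(plan, world)   # only now is replanning triggered
```

The same check works in reverse for the second point: add "opportunity" predicates that, when they become true, justify searching for a better plan.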
While the original question was about planners, pretty quickly we ended up talking about oscillations in behaviors generally — which are a problem just as much for reactive state machines. Some tricks that worked quite well for me were:
Use a time-lock to prevent behaviors being re-executed for n seconds.
Make sure most conditions are soft, i.e. used only for deciding what to do next, not for triggering an immediate change in behavior.
So, for example, in the context of attack ranges in action games, only the inner radius would be a hard condition; an enemy entering that radius would immediately cause the character to fall back and shoot. The other attack ranges would be used simply to decide whether to advance or take cover… This way, the middle radius does not cause oscillation; it just helps with context-sensitive combat decisions.
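Both tricks can be sketched together. The radii values, behavior names, and `TimeLock` class below are hypothetical, assuming a simple selector that runs once per decision tick: only the inner radius interrupts the current behavior (hard condition), while the middle radius merely influences the next choice (soft condition).

```python
# Hypothetical sketch of the two anti-oscillation tricks: a time-lock that
# prevents a behavior being re-selected for n seconds, and attack radii
# where only the inner one is a hard (interrupting) condition.

class TimeLock:
    def __init__(self, lock_seconds):
        self.lock_seconds = lock_seconds
        self.last_run = {}

    def allow(self, behavior, now):
        """Reject re-execution of `behavior` within the lock window."""
        last = self.last_run.get(behavior)
        if last is not None and now - last < self.lock_seconds:
            return False
        self.last_run[behavior] = now
        return True

INNER, MIDDLE = 5.0, 15.0   # example attack radii

def choose_behavior(distance, current):
    if distance < INNER:            # hard condition: always interrupts
        return "fall_back_and_shoot"
    if current is not None:         # soft conditions never interrupt...
        return current
    # ...they only decide what to do next, so crossing the middle
    # radius back and forth cannot cause oscillation.
    return "advance" if distance > MIDDLE else "take_cover"
```

For example, an enemy drifting between 10 and 20 metres keeps the current behavior, while one closing inside 5 metres forces the fall-back immediately.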
Using Learning AI
There was a consensus about the problems with machine learning (ML) techniques such as neural networks (NN) and genetic algorithms:
Online use is limited, as incremental learning (e.g. back-propagation) is unpredictable, and previously learned information is typically forgotten.
NN with multiple layers are black boxes, for all intents and purposes. Spending time to understand or manually design these is just not worth it. (Although Steve Grand made that work for Creatures.)
Neuro-evolution was mentioned multiple times with good feedback, but it still doesn’t resolve the problems with ML for games.
Some solutions, however, were presented:
Single-layer perceptrons are the most feasible for use in games as they are very intuitive. Black & White uses these in some form for learning desires.
There are automated techniques for converting neural networks to fuzzy rules or state machines, but without any annotations, these other representations are still relatively hard to understand.
It’s generally important to treat a learning technique as a random controller during design: build your architecture so that learnt behaviors are intelligent and believable no matter what the NN decides.
Keep the machine learning components very small and very isolated. NN are good for estimating simple continuous floating-point values.
If you must do learning within the game, make sure to store the samples and do batch learning. This is much more predictable.
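Several of these points can be illustrated in one place: a single-layer perceptron is small, isolated, and its weights are directly readable, and training it in batch from stored samples keeps the result predictable. This is a generic sketch, not the Black & White implementation; the "desire to eat" inputs and targets are invented for illustration.

```python
# Hypothetical sketch: a single-layer perceptron kept small and isolated,
# trained in batch from stored samples rather than incrementally.

class Perceptron:
    def __init__(self, n_inputs):
        self.weights = [0.0] * n_inputs
        self.bias = 0.0

    def predict(self, inputs):
        # A plain weighted sum: each weight is directly readable as the
        # importance of one input (e.g. hunger, distance to food).
        return sum(w * x for w, x in zip(self.weights, inputs)) + self.bias

    def train_batch(self, samples, rate=0.1, epochs=50):
        """Replay all stored (inputs, target) samples each epoch,
        nudging weights toward the targets (delta rule)."""
        for _ in range(epochs):
            for inputs, target in samples:
                error = target - self.predict(inputs)
                self.bias += rate * error
                self.weights = [w + rate * error * x
                                for w, x in zip(self.weights, inputs)]

# Invented samples: "desire to eat" driven mostly by the first input (hunger).
samples = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.2),
           ([1.0, 1.0], 1.0), ([0.0, 0.0], 0.0)]
p = Perceptron(2)
p.train_batch(samples)
```

Because the whole sample set is replayed each epoch, old experiences are not forgotten the way they are with purely incremental updates, and the learnt weights can be inspected (and clamped) by a designer.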
This led the discussion into debugging…
The question was asked in terms of planning: how do you debug a planner? However, this is also good general advice:
Make sure your core algorithm is unit tested. Nobody would implement a new programming language without 100% test coverage these days, and your AI engine is no different.
Store the latest plans in memory, in a log file locally, or over the network. Make it easy to attach these to test reports.
If possible, keep a copy of the state the planner used to make its decisions, and store it in the same way, so the decision can be reproduced later.
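The last two points amount to logging a snapshot of the planner's input next to its output, then replaying the snapshot offline. The sketch below assumes a trivial stand-in planner and uses JSON so records can go to a local file or over the network; none of the names are from a real engine.

```python
import json

# Hypothetical sketch: snapshot the exact state the planner saw, keep the
# resulting plan beside it, and replay the snapshot to reproduce a decision.

def make_plan(state):
    # Trivial stand-in planner whose decisions depend only on `state`.
    if state["health"] < 25:
        return ["retreat", "heal"]
    return ["advance", "attack"]

def plan_and_log(state, log):
    plan = make_plan(state)
    # Serialise both together so the record is self-contained and can be
    # written to a log file locally, or sent over the network.
    log.append(json.dumps({"state": state, "plan": plan}))
    return plan

def replay(record):
    """Re-run the planner on a logged snapshot. A mismatch means either a
    bug, or a hidden input the snapshot failed to capture."""
    entry = json.loads(record)
    return make_plan(entry["state"]) == entry["plan"]

log = []
plan_and_log({"health": 10, "ammo": 3}, log)
assert all(replay(r) for r in log)
```

These replay records double as fixtures for the unit tests mentioned above: a logged snapshot from a bug report becomes a regression test.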
A talk earlier in the day.
AI During Development
A researcher at Imperial College is looking into AI to help generate virtual worlds given some examples. The idea is for a whole hotel to be created procedurally from sample rooms, copying the style given by a level designer.
The participants discussed the potential for using AI to help create levels in action games. Bots could be used to simulate the player, and the emotions of the bots could provide feedback for the level designers. (Very much like how Sim Golf works, forcing you to design better courses for the AI golfers.)
Most participants were using waypoint graphs for navigation, and Kynapse’s automatic “path-data” generation was being used successfully by participants.
We agreed, however, that navmeshes provide better information, but they can be harder to create. Getting artists to build these is a good option, but generating them from physics bounds is also possible. (I mentioned this was done very successfully at my last company.)
PathEngine is apparently working on a tool to generate the navmeshes automatically from more complex physics geometry, which everyone was excited about. It is a very tough thing to achieve, however, so it might be a while before it’s reliable.