
Global Jam Report: On Little Old Cleaning Ladies in Stealth Games

Alex J. Champandard on February 4, 2010

Last weekend, the Global Game Jam was held worldwide and we took the opportunity to turn it into an AI Marmalade. (For those of you that still don't get the breakfast joke: jam, marmalade. See what I did there?) Part of the AiGameDev.com "team" of contributors was available on Saturday and Sunday — Radu, Richard, and partly Nick — and I (Alex Champandard) made some changes in my usual schedule for the weekend, and we built a stealth game!

As a base for the implementation, we decided to use our AI Sandbox as a framework and build this game on top. The goal of the AI Sandbox, beyond just demonstrating AI algorithms and techniques, is to be a good prototyping environment for gameplay & AI — so we figured this would be a good stress test! (The AI Sandbox is currently only available to Premium members, but there's now an official site you can visit!)

NOTE: Parts of this article were written by Radu Septimiu Cristea, who was the main driving force behind the coding this weekend. I'll post a video of the game after the next release of the AI Sandbox, which will include the source code.

Screenshot 1: A screenshot from the final game. The player is in white, and guards in red. (Click to enlarge.)

Design

The idea to make a stealth game made the most sense for a few reasons:

  1. Such games often exhibit fascinating AI & interesting gameplay dynamics.

  2. They don't necessarily require you to implement an entire combat system!

The design came together in discussions on Friday (and a few before). I described the basic idea in our project log in the forums:

“You play a character that's infiltrating a premises: an indoor building, or an outside location. Your goal is to try to get to one or more points in the building to pick up / drop off something.

There are guards on patrol, and their behavior is mostly predictable. These guards move relatively slowly and don't see too far. If they get suspicious they approach the last known position, look around and repeat.

An old cleaning lady is also on the premises, she's pretty smart. If she sees something, she'll keep looking and do an exhaustive and methodical search — albeit very slowly. She's there to keep the pressure on.”

The idea of the little old cleaning lady was the best received on Twitter — clearly one of the best modern tools for early design feedback! The cleaner provides a good way to keep the pressure on as a gameplay mechanic, but she’s a good comedy addition to the game too. In fact, we got quite a few suggestions from other Jammers on Twitter for what the old cleaning lady could do.

Anyway, that design was good enough to get started…

Preparations

Screenshot 2: The reports of our build servers, as we stabilized the repository post-merge. (Click to widen.)

Reusing an existing framework seems like taking shortcuts for a Game Jam, but in our case it took the best part of Saturday morning to get to our starting position. In particular, we wanted to merge some experimental branches that we’d been working on in our SVN repository into the trunk:

  • An improved skeleton that Richard Fredriksson has been working on. The new skeleton is designed to be much faster and simpler to mirror at runtime, and comes with its own Rig in our animation tools.
  • The 2D grid for pathfinding that Nick Samarin had implemented as a dynamic alternative to the waypoint graph, similar to UNCHARTED 2.

It paid off in the end, however. We managed to reuse everything, including the A* pathfinding, the navigation, the locomotion, and the basic character animation to build a real 3D game. Also, since Richard had the character Rig, we managed to make some new walk animations when we needed them. Nick’s code was intended to be the base of the occupancy grid that the cleaning lady would use to hunt you down, but unfortunately we didn’t get around to implementing that…

From then on, we could start work on the behaviors.

NPC Behaviors, by Radu

Once the preparations were over, the starting point of our game was a skeleton of the Hide & Seek demo. After removing the code that was not needed, the first order of business was creating two brain components, one for the guards and one for the cleaning lady. It soon became apparent that this was going to be the perfect time to stress-test the behavior tree framework and the helper functionality that came with it, namely the tree builder.

Where the guards’ BT was somewhat similar to the seekers in the Hide & Seek demo, the star of the show was the cleaning lady’s BT. It consisted of three main parts: the idle part, represented by the cleaning-around behavior; the suspicious part, which called for movement in the direction of the most likely player position; and finally the combat part, which made the cleaning lady call the guards to the player’s position. Careful monitoring ensured a reactive BT that would reliably switch from one part to another depending on the world state. This was achieved by the monitor and check node decorators, which used atomic conditional actions to do the actual state checks. Being under time pressure, I didn’t fully use the modules that the framework provides, and some code duplication issues are visible. Fortunately, I plan to refactor the code into reusable modules that use the build-in-place construction technique. This would especially help with the game state monitoring nodes.
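To make that structure more concrete, here is a minimal C++ sketch of the three-way split: a prioritised selector that re-checks its conditions every tick, so the tree can switch reliably between the combat, suspicious and idle branches. None of these names come from the sandbox’s actual behavior tree framework; they are only illustrative.

    // Minimal sketch (not the sandbox's API) of the cleaning lady's three-part BT:
    // a prioritised selector whose conditions are re-checked on every tick.
    #include <functional>
    #include <vector>

    enum class Status { Running, Success, Failure };

    struct Branch {
        std::function<bool()>   condition;  // atomic conditional action (world-state check)
        std::function<Status()> behavior;   // the branch's behaviour while the condition holds
    };

    // Tick the highest-priority branch whose condition currently holds.
    Status tickCleaningLady(const std::vector<Branch>& branches)
    {
        for (const Branch& b : branches)
            if (b.condition())
                return b.behavior();
        return Status::Failure;
    }

    // Usage idea: branches ordered combat > suspicious > idle, e.g.
    //   { seesPlayer,     callGuards   },
    //   { hasSuspicion,   investigate  },
    //   { alwaysTrue,     cleanAround  }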

Screenshot 3: A test level that Radu hand-designed, along with the cleaning lady in yellow who just alerted the guard in red. (Click to enlarge.)

Another problem that we had to solve was one of “communication” between the level dwellers. The cleaning lady notified the guards of the player’s intrusion by setting a flag in her brain’s blackboard; a controller would then read the flag and dispatch that message to the guards. I found this technique really lacking, and it was pretty obvious that it would not scale very well. An event system would be really well suited to this problem of inter-actor notification.
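As a rough idea of what such an event system could look like, a tiny publish/subscribe bus would let the cleaning lady notify every interested guard directly, without a controller polling blackboard flags. All identifiers here are hypothetical, not sandbox code.

    // Sketch of a simple event dispatcher for inter-actor notification.
    #include <functional>
    #include <vector>

    struct PlayerSpottedEvent {
        float x, y, z;   // last known player position reported by the spotter
    };

    class EventBus {
    public:
        using Handler = std::function<void(const PlayerSpottedEvent&)>;

        // Each guard registers a handler once, e.g. to update its blackboard.
        void subscribe(Handler handler) { handlers_.push_back(std::move(handler)); }

        // The cleaning lady publishes once when she spots the player.
        void publish(const PlayerSpottedEvent& e) const {
            for (const auto& h : handlers_) h(e);
        }

    private:
        std::vector<Handler> handlers_;
    };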

The recently added reasoning layer (as described in the masterclass with Damian Isla) was adapted to handle IsInView interrogations from the BT conditional actions. As we saw in our previous implementation of this reasoning layer, it proves to be a very powerful technique, greatly simplifying the actual BT actions, which just have to pull data from the layer. The layer made heavy use of the underlying Query – Job Processor, constantly sending LOS queries to the sensory system. At this point it became obvious that the LOS queries and the underlying querying mechanism were not flexible enough, as we had to hardcode the field of view in the LOS job.
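Purely for illustration, a reasoning layer of this kind might cache the answers to asynchronous LOS queries so that the BT conditions become trivial lookups. The names below are made up, and the sandbox’s actual Query – Job Processor differs in the details.

    // Sketch of a reasoning layer that answers IsInView interrogations by caching
    // the results of line-of-sight jobs submitted to the sensory system.
    #include <unordered_map>

    struct LosQuery  { int targetId; };                // submitted to the sensory system
    struct LosResult { int targetId; bool visible; };  // returned by a completed LOS job

    class VisionReasoner {
    public:
        void track(int targetId) { visible_.emplace(targetId, false); }

        // Called from BT conditional actions: just pulls the cached answer.
        bool isInView(int targetId) const {
            auto it = visible_.find(targetId);
            return it != visible_.end() && it->second;
        }

        // Called each frame: keep a LOS query in flight for every tracked target.
        template <typename QueryProcessor>
        void update(QueryProcessor& processor) {
            for (const auto& entry : visible_)
                processor.submit(LosQuery{entry.first});
        }

        // Callback when a LOS job completes.
        void onResult(const LosResult& r) { visible_[r.targetId] = r.visible; }

    private:
        std::unordered_map<int, bool> visible_;
    };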

Screenshot 4: Four guards following the player while moving around, with the debug rendering for the navigation enabled. In red pixelated sprites you see the patrol routes and the last known player position. (Click to enlarge.)

One thing that inevitably came up was the navigation system. So far, it had only been used for situations where the targets were static and the AI planned ahead to its next destination. When used on dynamic targets, there were problems with starting and stopping every time the target changed. Radu mentioned these concerns:

“One concern was the navigation system, having to handle a constantly changing destination point and keep the motion fluent enough to make chasing the player exciting and engaging. This concern was laid to rest sometime Sunday afternoon when I got the first glimpse of the walk animation from Rikki and improvements in the navigation system from Alex.”

Supporting dynamic targets in a navigation system is a topic that often comes up, and the best solution is often the easiest: when the target has moved a certain distance (say 5m), run the pathfinder again; or if the target has moved a bit and a certain time has elapsed (say 5s), run the pathfinder again too.
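As a sketch of that heuristic (the 5m and 5s figures are just the example values above, and none of this is the sandbox’s actual code):

    // Decide when to re-run the pathfinder while chasing a moving target.
    #include <cmath>

    struct Vec3 { float x, y, z; };

    inline float distance(const Vec3& a, const Vec3& b) {
        const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    class ChaseNavigator {
    public:
        // Returns true when the pathfinder should be run again for 'target'.
        bool shouldRepath(const Vec3& target, float elapsedSeconds) {
            timeSinceRepath_ += elapsedSeconds;
            const float moved = distance(target, lastGoal_);

            const bool movedFar    = moved > 5.0f;                            // ~5 m
            const bool movedAndOld = moved > 0.5f && timeSinceRepath_ > 5.0f; // ~5 s

            if (movedFar || movedAndOld) {
                lastGoal_ = target;
                timeSinceRepath_ = 0.0f;
                return true;
            }
            return false;
        }

    private:
        Vec3  lastGoal_{0.0f, 0.0f, 0.0f};
        float timeSinceRepath_ = 0.0f;
    };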

The only change required in the AI Sandbox code was to not tell the Locomotion system to stop and go into Idle, and instead keep it Running. The results look pretty good, despite some issues in the underlying locomotion...

Locomotion Improvements, by Alex

Screenshot 5: The set of walking motion clips (individual steps) that make up the walking motion graph. In yellow are left to right steps, and in white are right to left steps. (Click to enlarge.)

So far, we've only used running motions in the AI Sandbox, since the motion capture we used was focused on running, jogging and sprinting. There were also some walks, but they weren't as useful, nor as good quality as the other animations. Luckily, Richard touched them up for the new skeleton and re-exported them, so I had something to work with. However, I ran into a bit of trouble with the motion graph code…

The core of the problem was the footplant detector. I needed to detect footplants for the different types of motion, so that they can be aligned when the foot is down. For the running motions, I managed to create a function that extracts local minima of the foot positions and filters out the false positives, to hopefully end up with only the exact footplants. However, this broke for the new walks, and I had to make some adjustments to thresholds and various parameters.
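A simplified sketch of that approach, assuming we have the foot's height per frame; the thresholds are illustrative only and would need tuning per motion type (runs versus walks), which is exactly where it broke for us.

    // Find candidate footplants as local minima of a foot's height curve, then
    // reject minima that sit too high off the ground or too close to the
    // previous plant.  Thresholds are placeholders, not the sandbox's values.
    #include <vector>

    std::vector<int> detectFootplants(const std::vector<float>& footHeight,
                                      float maxPlantHeight = 0.05f,  // metres above ground
                                      int   minFrameGap    = 8)      // frames between plants
    {
        std::vector<int> plants;
        for (int i = 1; i + 1 < static_cast<int>(footHeight.size()); ++i) {
            const bool localMinimum = footHeight[i] <= footHeight[i - 1] &&
                                      footHeight[i] <= footHeight[i + 1];
            if (!localMinimum || footHeight[i] > maxPlantHeight)
                continue;                                   // reject false positives
            if (!plants.empty() && i - plants.back() < minFrameGap)
                continue;                                   // too close to the last plant
            plants.push_back(i);
        }
        return plants;
    }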

The code works fine now, but in retrospect there are a few problems with our current solution, in particular that it finds footplants one side at a time and considers neither the other foot nor the rest of the body. There’s lots of academic research on the topic, but so far we haven’t needed it. That may not be the case for much longer!

One last problem that remains is a question of alignment in the motion graph builder. The two walk steps are not aligned or synchronized quite perfectly, which leads to a zombie-like walk. It works well enough for this game, though, since it is supposed to be an old cleaning lady, but this part of the code will need a little reworking in the future…

Game Logic & User Interface, by Radu

Another technique that proved its value was the Model-View-Controller pattern. I was quickly able to get a “treasure” entity and its graphical representation into the world, but I was very surprised by how easily I was able to create a game state controller class that handled win/lose conditions and plug it painlessly into the main controller manager class of the game.

The duties of the controller handling the game state included monitoring the player’s position with respect to the guards, and checking whether the player had collected all the gold on the map. While collecting all the gold would trigger loading of the next level, the consequence of getting caught by a guard was basically a game reset. The reset, as trivial as it sounds, implied resetting the player’s position and the gold, resetting the blackboards and positions of the NPCs, and finally stopping the execution of the BT interpreter. Except for the behavior part, everything went smoothly.
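As a rough sketch of those duties, the per-frame checks boil down to one lose condition and one win condition. The names below are hypothetical, not the actual controller class.

    // Game-state controller sketch: caught by a guard resets the level,
    // collecting all the gold loads the next one.
    #include <vector>

    struct Actor { float x, y; };

    class GameStateController {
    public:
        void update(const Actor& player, const std::vector<Actor>& guards,
                    int goldRemaining)
        {
            for (const Actor& g : guards)
                if (isCaught(player, g)) { resetLevel(); return; }

            if (goldRemaining == 0)
                loadNextLevel();
        }

    private:
        bool isCaught(const Actor& player, const Actor& guard) const {
            const float dx = player.x - guard.x, dy = player.y - guard.y;
            return dx * dx + dy * dy < catchRadius_ * catchRadius_;
        }

        void resetLevel()    { /* reset player position & gold, clear NPC blackboards, stop BTs */ }
        void loadNextLevel() { /* all gold collected: advance to the next level */ }

        float catchRadius_ = 1.0f;  // metres (placeholder value)
    };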

Resetting the behavior called for clearing the blackboard, which holds a considerable amount of data. It became apparent that in the near future it has to be broken up into smaller components (ordering, visualization flags, patrol data…) to avoid the monolithic construct that the blackboard is slowly becoming as it holds more and more data. Another minor issue was stopping the interpreter, which called for adding a stop feature to the Brain base class in the behavior library. As necessity is the mother of invention, this feature will surely make it into the next sandbox release.
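One possible way to split the blackboard up, purely as an illustration and not the sandbox’s actual layout: each system owns and resets only the component it cares about.

    // Blackboard composed of smaller, independently resettable components.
    #include <vector>

    struct Vec2 { float x = 0.0f, y = 0.0f; };

    struct OrderData {          // current high-level order for the NPC
        bool investigate = false;
        Vec2 lastKnownPlayerPos;
    };

    struct VisualizationFlags { // debug-rendering toggles
        bool showPatrolRoute = false;
        bool showViewCone    = false;
    };

    struct PatrolData {         // patrol route and progress
        std::vector<Vec2> waypoints;
        int               currentWaypoint = 0;
    };

    struct Blackboard {
        OrderData          orders;
        VisualizationFlags debug;
        PatrolData         patrol;

        void reset() {          // a game reset only reassigns defaults per component
            orders = OrderData{};
            patrol.currentWaypoint = 0;
        }
    };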

Screenshot 6: The user interface elements all together, including top-left progress icons, over-head indicators for each NPC, and the game logic overlay for winning/losing the game. (Click to enlarge.)

With a couple of hours left on the clock, and energy resources slowly dwindling, I got to revisit the UI system I had written some months ago. The test-driven development approach used back then, suggested and enforced by Alex, really paid off in full. The UI code quickly fell into place with the use of the WidgetHelpers and behaved as expected without any problems. Most of the time was spent tweaking colors, timing and finding nice pictures to use.

Gameplay & Levels, by Alex

In parallel, I was working on the camera system. I started out experimenting with a GTA-like follow camera; being involved so closely certainly increased the tension of the game. However, such cameras often need a lot of fine-tuning to work, for example to avoid obstacles when walking close by, or to move into better positions to help the player visualize the scene. Instead, I reverted back to the top-down camera which we used for early development, but I moved it lower down to only about 25 meters above the ground and added smooth following movement.
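For illustration, a smoothly-following top-down camera of this kind can be as simple as easing toward a point a fixed height above the player every frame. The 25-meter height matches the setup described above; the smoothing factor and the up-axis convention are assumptions, and this isn't the sandbox's camera code.

    // Top-down follow camera: frame-rate independent exponential smoothing
    // toward a point 'height_' metres above the player.
    #include <cmath>

    struct Vec3 { float x, y, z; };

    class FollowCamera {
    public:
        void update(const Vec3& playerPos, float dt) {
            const Vec3 target{playerPos.x, playerPos.y + height_, playerPos.z};
            const float t = 1.0f - std::exp(-smoothing_ * dt);
            position_.x += (target.x - position_.x) * t;
            position_.y += (target.y - position_.y) * t;
            position_.z += (target.z - position_.z) * t;
        }

        const Vec3& position() const { return position_; }

    private:
        Vec3  position_{0.0f, 25.0f, 0.0f};
        float height_    = 25.0f;  // metres above the ground, as described above
        float smoothing_ = 5.0f;   // higher = snappier follow (assumed value)
    };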

This camera works relatively well because you can see the action locally around you (as the player), but you can't see the whole level. You still have to be careful locally, but you have enough information to be tactical. Also, it makes it much easier to see what the AI is doing, so we can show it off in a better light this way, for instance watching the old cleaning woman track you down!

After that, Radu and I spent a bit of time tweaking variables that affected the AI, for instance the field of view or maximum view distance, and various timers that trigger behaviors. The speed and movement of the player relative to the guards was also an important thing to adjust, and I experimented locally with a simple sprint button to help you lose guards if you get into too much trouble. Finally, I finished off implementing random level generation, so you get some nicely structured levels (with walls and recognizable features) that are beyond just randomly placed blocks.
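The kinds of variables we iterated on are easiest to keep in one tuning structure; the values below are placeholders for illustration, not the ones we actually used.

    // Gameplay/AI tuning knobs of the sort we tweaked (placeholder values).
    struct TuningParams {
        float guardFieldOfViewDeg  = 90.0f;  // horizontal view cone of the guards
        float guardMaxViewDistance = 12.0f;  // metres
        float suspicionTimeout     = 4.0f;   // seconds before a guard gives up searching
        float playerWalkSpeed      = 2.0f;   // metres per second
        float playerSprintSpeed    = 3.5f;   // sprint button, to shake off guards
        float sprintDuration       = 2.0f;   // seconds of sprint before cooldown
    };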

Summary

The biggest lesson learned here is probably the most common from Game Jams: short events are really great for focusing attention and development on ambitious yet achievable goals. This was great both for those of us who worked on the game and for the AI Sandbox itself. We now have a much clearer picture of where to spend our time — although many of the things that came up were already known issues!

Beyond that, a big take-away for me was the real-world value of unit testing. Not just automated testing in general (we have functional tests for most of the components and systems), but unit testing specifically. The parts of the code that were unit tested "Just Worked™" and scaled whenever we needed them to, without breaking a sweat. This was the case for my Behavior Tree code and Radu's User Interface code. The pathfinding in the navigation, which Jad Nohra wrote with tests, also had no issues... However, the locomotion builder and its components caused lots of trouble, and those have little or no unit tests.

Going forward, we will be holding these AI Marmalades more often, possibly once every month or two. They'll help put our code to use and bring the team together for a weekend event! As a result, we'll also have a large prioritized list of things to work on for the AI Sandbox, as we do now!
