
Terrain Annotations: From Designer Hints to Automated Pre-processing

Alex J. Champandard on September 18, 2008

An AI that can’t understand its game environment is pretty useless; but that’s what terrain reasoning is for! In most games, implementing terrain reasoning starts with annotations of the terrain, which can either be placed manually by designers or automatically generated by a pre-process in the tools pipeline.
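
In data terms, an annotation is usually little more than a tagged position (or region) plus a record of where it came from, so the tools can tell hand-placed hints apart from generated ones. Here is a minimal C++ sketch; the type names and fields are illustrative assumptions, not taken from the report:

    #include <vector>

    struct Vec3 { float x, y, z; };

    enum class AnnotationType   { Cover, SniperSpot, ChokePoint, Ambush };
    enum class AnnotationSource { DesignerPlaced, AutoGenerated };   // manual hint vs. pre-process output

    struct TerrainAnnotation {
        AnnotationType   type;
        AnnotationSource source;
        Vec3             position;
        float            radius;   // extent the hint applies to, if it covers a region
    };

    // A level's annotation set is typically a flat list the runtime AI can query.
    using AnnotationSet = std::vector<TerrainAnnotation>;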

Over the last month and a half, I’ve been working with some of our resident experts from the AiGameDev.com forums to create special reports on specific topics. Here’s the first part of a document on terrain reasoning, which covers the annotation process in particular. Now, I’ve been working in game AI for a while, but I have to admit I found many of the answers very insightful — and I even learned quite a few new things!

To give credit where it’s due, here are the four resident experts who contributed to this report (in order of alphabetical importance):

  • Kevin Dill — Works as a Senior AI Programmer at Rockstar, having previously worked on Master of Orion 3 and Kohan 2.

  • Sergio Garces — Is a Senior AI Programmer at Radical, working on Prototype. He previously worked on the AI of Praetorians and Imperial Glory.

  • William van der Sterren — As a consultant for CGF-AI, he assisted Guerrilla Games in creating the AI for Killzone and Shellshock: Nam ’67.

  • Paul Tozour — Is best known for creating the AI for Metroid Prime 2 & 3, as well as Deus Ex 2 and Thief 3. He is currently working on Project Offset at Intel.

You can download the whole thing here:

Terrain Analysis, Special Report — Part 1
Download PDF (1.5 MB, 10 pages)

We’re still in the process of formatting the rest of this document (it’s huge, there’s so much useful stuff in it!) as well as other topics for the upcoming launch of our very own membership site. Over the next few weeks, you’ll be getting more of a taste of this kind of content: down-to-earth, up-to-date and practical.

Stay tuned, and be sure to sign up for free as an insider for our upcoming AAA interviews!

Discussion (3 Comments)

zoombapup on September 19th, 2008

Interesting. One thing that occurred to me while I was reading was the option of using designer-placed annotations, but using automated means for testing the designer's choices. Basically, you'd give the designer really simple tools for annotating the levels. Then you would have a "check AI" button in the toolset, which would simply flag up any annotations that don't meet specific rules. The idea being that, like a leak check in BSP, you flag up any errors or potential errors in case the designer wasn't aware of the issue. A good example given was sniper spots being blocked by large buildings: you would have each sniper spot do a bunch of raycasts, and if any of those hit a large blocking object, you would flag that as a potential issue.

I had a big argument with a friend over this issue of designer vs. automation, and I firmly come down in the "automation where possible, but ultimately must be designer controlled" camp. I just don't think you can hit all the edge cases needed to make a fully automated system viable.
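
To make the "check AI" idea concrete, here is a minimal C++ sketch of such a validation pass; the raycast hook, the sniper-spot layout and the thresholds are all illustrative assumptions rather than code from any shipped toolset:

    #include <cmath>
    #include <cstdio>
    #include <functional>
    #include <vector>

    struct Vec3 { float x, y, z; };

    struct SniperSpot {
        Vec3 position;
        Vec3 facing;   // direction the designer expects the spot to cover
    };

    // Engine raycast hook supplied by the toolset: returns true if the ray hits
    // world geometry within maxDist, and reports the blocker's rough size.
    using RaycastFn = std::function<bool(const Vec3& origin, const Vec3& dir,
                                         float maxDist, float* blockerSize)>;

    // Rotate a direction around the vertical (z) axis by 'angle' radians.
    static Vec3 RotateYaw(const Vec3& d, float angle)
    {
        const float c = std::cos(angle), s = std::sin(angle);
        return Vec3{ d.x * c - d.y * s, d.x * s + d.y * c, d.z };
    }

    // "Check AI" pass: flag sniper spots whose firing lanes are mostly blocked
    // by large geometry, much like a BSP leak check flags suspect areas.
    std::vector<int> CheckSniperSpots(const std::vector<SniperSpot>& spots,
                                      const RaycastFn& raycast)
    {
        const float kLaneLength   = 80.0f;  // how far a usable sniper lane should reach
        const float kLargeBlocker = 5.0f;   // anything this size counts as "a large building"
        const int   kRaysPerSpot  = 8;      // fan of rays swept around the facing direction

        std::vector<int> flagged;
        for (int i = 0; i < (int)spots.size(); ++i) {
            int blockedRays = 0;
            for (int r = 0; r < kRaysPerSpot; ++r) {
                // Sweep a 90-degree fan centred on the designer's chosen facing.
                const float angle = (-0.5f + r / float(kRaysPerSpot - 1)) * 1.5708f;
                const Vec3 dir = RotateYaw(spots[i].facing, angle);
                float blockerSize = 0.0f;
                if (raycast(spots[i].position, dir, kLaneLength, &blockerSize) &&
                    blockerSize >= kLargeBlocker)
                    ++blockedRays;
            }
            if (blockedRays > kRaysPerSpot / 2) {   // most lanes blocked: warn the designer
                std::printf("sniper spot %d: %d/%d lanes blocked by large geometry\n",
                            i, blockedRays, kRaysPerSpot);
                flagged.push_back(i);
            }
        }
        return flagged;
    }

Taking the raycast as a caller-supplied hook keeps the check independent of any particular engine, so the same pass could sit behind an editor button or run as part of the level export.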

JonBWalsh on September 19th, 2008

A great read; I really hope to see more things like this, as I found it practical and interesting.

[quote]In a past project, we flagged all props with the type of cover they offered the AI characters, so they would automatically know how to use them.[/quote]

This seems like a great way to approach the problem, and I wonder how often it's used or how it could be expanded further. Having props automatically place cover points (that a level designer could then move or tweak) could strike a good balance between automated and manual processes, allowing rapid prototyping while not relying on a level designer to properly place every single annotation. I wonder what other real-time processes could be put into place to generate more automated annotations.

[QUOTE=zoombapup;4927]I had a big argument with a friend over this issue of designer vs. automation, and I firmly come down in the "automation where possible, but ultimately must be designer controlled" camp. I just don't think you can hit all the edge cases needed to make a fully automated system viable.[/QUOTE]

I like this idea as well, and I think I'm in the same camp as you. At the very least, the designer should be able to guide the automated process. One way this could happen is to give designers a few concrete annotations ('cover', 'choke-point', etc.) and then have the automated process go in and break them down further (such as determining which cover is good and which is poor). Ideally the process would be optional, so levels would be playable prior to any off-line processing.
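
As a rough sketch of the prop-driven idea above, the snippet below generates candidate cover points around flagged props, which a level designer could then move, delete or re-tag; the cover types, stand-off distance and compass-direction placement are illustrative assumptions only:

    #include <vector>

    struct Vec3 { float x, y, z; };

    enum class CoverType { None, Low, High };   // how much of a character the prop hides

    struct Prop {
        Vec3      position;
        float     radius;   // rough footprint of the prop
        CoverType cover;    // flagged when the prop is authored
    };

    struct CoverPoint {
        Vec3      position;   // where the AI character should stand
        Vec3      coverDir;   // direction towards the prop (the protected side)
        CoverType type;
    };

    // Generate candidate cover points around every flagged prop. Designers can
    // then tweak them, and a later off-line pass could score them (good vs. poor
    // cover) with line-of-sight tests against likely threat directions.
    std::vector<CoverPoint> GenerateCoverPoints(const std::vector<Prop>& props,
                                                float agentRadius)
    {
        // Four compass-direction candidates per prop keeps the output predictable.
        const Vec3 kDirections[4] = { { 1, 0, 0 }, { -1, 0, 0 }, { 0, 1, 0 }, { 0, -1, 0 } };

        std::vector<CoverPoint> points;
        for (const Prop& prop : props) {
            if (prop.cover == CoverType::None)
                continue;                              // prop offers no cover, skip it
            const float standOff = prop.radius + agentRadius;
            for (const Vec3& d : kDirections) {
                CoverPoint cp;
                cp.position = Vec3{ prop.position.x + d.x * standOff,
                                    prop.position.y + d.y * standOff,
                                    prop.position.z };
                cp.coverDir = Vec3{ -d.x, -d.y, 0.0f };   // protection comes from the prop side
                cp.type     = prop.cover;
                points.push_back(cp);
            }
        }
        return points;
    }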

William on September 19th, 2008

[QUOTE=zoombapup;4927]I had a big argument with a friend over this issue of designer vs. automation, and I firmly come down in the "automation where possible, but ultimately must be designer controlled" camp. I just don't think you can hit all the edge cases needed to make a fully automated system viable.[/QUOTE]

Agreed. One of the edge cases that requires designer control is "map boundaries" in FPSs. Many FPSs (Call of Duty, Medal of Honor, Brothers in Arms, Killzone) suggest wide open terrain while at the same time boxing the player into carefully picked lanes and areas; barbed wire, ledges, fences, wrecked cars, indestructible barn doors, etc. are used to contain the player. Typically, the terrain representation is constrained to those same player areas, because that's less work for the designer and yields better path-finding performance. The result of this constrained terrain representation is that the AI 'knows' there will never be threats coming from that road on our left, and doesn't cling to cover or scan its surroundings near a "map boundary" as much as the player would expect. We ran into this issue on Shellshock: Nam '67 (2004) and added a hack to the terrain analysis to reduce the problem; designer-placed 'threat sources' would have been a nicer solution.
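
To illustrate the designer-placed 'threat source' hint mentioned at the end of the comment above, here is a minimal C++ sketch of how such an annotation might feed the AI's behaviour near a map boundary; the structure, the radius test and the way it would be consumed are purely illustrative assumptions:

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Designer-placed hint: "threats could plausibly appear from here", even though
    // the terrain representation ends at the map boundary and path-finding never goes there.
    struct ThreatSource {
        Vec3  position;   // e.g. the far end of a road leading off the map
        float radius;     // how close an AI must be for the hint to matter
    };

    static float Distance(const Vec3& a, const Vec3& b)
    {
        const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Directions the AI should occasionally scan (and hug cover against) while
    // near the map boundary, so it acts as if threats could come from beyond the
    // playable area, the way a human player would expect.
    std::vector<Vec3> GetBoundaryThreatDirections(const Vec3& aiPosition,
                                                  const std::vector<ThreatSource>& sources)
    {
        std::vector<Vec3> directions;
        for (const ThreatSource& src : sources) {
            const float dist = Distance(aiPosition, src.position);
            if (dist > src.radius || dist < 1e-3f)
                continue;                      // hint out of range (or right on top of us)
            // Normalised direction from the AI towards the off-map threat.
            directions.push_back(Vec3{ (src.position.x - aiPosition.x) / dist,
                                       (src.position.y - aiPosition.y) / dist,
                                       (src.position.z - aiPosition.z) / dist });
        }
        return directions;
    }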
