
Designers and Testers: Friend, Foe, or Just Frustrating for Programmers?

Dave Mark on March 4, 2008

This article introducing AiGameDev.com’s weekly developer discussion was written by Dave Mark, the AI guru behind Intrinsic Algorithm. Let him know what you think and post a comment below!


The Game Developers Conference has always been known as a place to go learn things about the industry and the technology. It is also a prime arena to put timely concerns out in the open for like-minded people to kick around a bit… A place to ask questions and to hopefully get answers. However, sometimes those questions don’t get answered completely — even in a roundtable setting.

This year’s GDC provided an example. In the first AI Roundtable of the week (notes & audio), the conversation quickly turned to what bordered on a gripe session.

We had been talking about the pros and cons of “emergent behavior” (look past the point that the term seems to have a rather amorphous definition lately). I summed it up with what I admit was a leading question designed to stir things up somewhat (34:00 in the audio):

“Two things have been touched on in a very similar vein: One, emergent behavior [annoys] the testers. And two, not having absolute dictatorship control over individual units [annoys] the designers. They want that specific thing to do this exact thing ‘until I’m done with it and then you go ahead and do whatever you want.’ So how much of a battle are we fighting, then, because people are afraid of testing emergent behaviors or afraid of yielding up control to an AI ‘black box’ system?”


Photo 1: The left side of the Day 1 AI Roundtable at GDC 2008.

Enabling Designers

There were a number of thoughtful comments and answers to that. One, for example, pointed out that our job is to enable the designers to do their job. However, that was countered later by a comment that designers often don’t realize what we can do to make their jobs easier until we educate them. Adam Russell (ex-Lionhead, now back in Academia), in a self-admitted shameless plug for one of his articles for the new “AI Game Programming Wisdom 4” (37:57 in audio), commented that maybe his article was a bit harsh in attacking designers by claiming that they needed to move away from total control. Instead he suggested that:

“…both sides need to move away from their heartland into a more integrated approach. Designers need to be more procedural in their thinking and AI people need to be more up for supporting authorship.”

Can’t Everyone Just Get Along?

The point of all this, of course, would be to facilitate communication. In a sense, educating the designers and expecting them to think more procedurally, as Adam mentioned, would enable the AI programmers to do their jobs. In fact, much of the ensuing conversation was about facilitating conversation between team members — whether it be open and mixed workspace environments or simply better defining the pipeline of ideas.

With regard to testers, those comments tended towards the notion that, “emergence means you just can’t test everything.” And, given that premise, emergence is going to be something that the QA (and the rest of the team) is going to have to live with. An example was that of a sports game (there was a “FIFA” guy in the house). There are basically 22 independent agents on the field at all times, each doing their own thing. And they aren’t simply background characters, either — they are the characters. There is no way to test all the possibilities. The only thing you can do is to have QA report things that just look outright stupid if and when they occur.


Photo 2: The right side of the Day 1 AI Roundtable at GDC 2008.

Everyone’s a Designer

In the end, one statement rang out over them all: “We are all designers.” At least to some extent or another. Whether it be artists, scripters, voice actors, engine programmers, physics programmers, or AI programmers, we are all responsible for presenting some portion of the final product to the consumer. Therefore, the notion that a “designer does the design and programmers do the programming” is rather arbitrary. Programming requires us to design the algorithms that actually give purpose to the code. Likewise, as Adam mentioned, it is up to the designers to assemble their thoughts and ideas into something that, in a vague sense, resembles pseudo-code. (“The monster will do this and, if this is in effect, will proceed to doing that.”) Job descriptions amongst the team begin to look more like Venn diagrams at that point. John Abercrombie (2K Boston - AI lead on BioShock) talked about how their team all sat in large intermixed groups of artists, designers and programmers. That way, they could listen to each other’s issues and offer suggestions on problems as a group.
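To see just how thin that line is, take the designer’s sentence above and write it down the way a programmer would. The fragment below is entirely made up for illustration, but the translation is almost mechanical:

```cpp
#include <iostream>

// A hypothetical, invented fragment to show how close the designer's
// sentence already is to code.
struct WorldState {
    bool alarmRaised;              // "this is in effect"
};

struct Monster {
    void Patrol() { std::cout << "Monster patrols its route.\n"; }
    void Attack() { std::cout << "Monster proceeds to attack.\n"; }

    // "The monster will do this and, if this is in effect,
    //  will proceed to doing that."
    void Think(const WorldState& world) {
        Patrol();                  // "the monster will do this"
        if (world.alarmRaised)     // "if this is in effect"
            Attack();              // "will proceed to doing that"
    }
};

int main() {
    WorldState world = { true };   // pretend the alarm has been raised
    Monster monster;
    monster.Think(world);
    return 0;
}
```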

So, to warp this into this week’s discussion column… close your eyes for a minute and imagine you were there in the roundtable that day (again, you can use the audio file to put you in the mood). Now, let me pose the question to you:

With AI becoming so much more important for the experience of the gamer, how do we bridge the gap between AI programmers and designers? How do we assemble systems that meet the requirements of those designers but also go beyond the canned scripting to make for a living, dynamic, immersive world? How do we do all of this without causing QA to go on strike (or QA to strike us)? Or, in the spirit of the GDC slogan, how do we enable our teams as a whole to “make better games?”


Photo 3: Adam Russell with a camera. If you are a designer, remember, he later apologized!

Discussion (7 Comments)

alexjc on March 4th, 2008

Wahay! Now I finally have a chance to join in the discussion... Thanks Dave :-)

I'll start by questioning the underlying assumption that there has to be a compromise between designer control and autonomy. A handful of recent AI techniques basically allow you to do both. I am, of course, thinking of hierarchical logic like HTN planners and behavior trees when they are structured in a goal-directed fashion. This means you can get the benefits of a planner along with the ability to micro-manage anything.

That, as far as I'm concerned, makes any lamentations from programmers inexcusable. If you're not even trying to provide this type of flexibility to your designers then you're a few years behind the state-of-the-art. Once you've done that, the designers will use whatever approach they want to achieve their goals based on the current situation, level, game and genre. That's the secret to "making better games" and creating richer environments.

In my experience, I've found that if you give designers amazing tools to work with, they'll be amazingly creative. Conversely, if your designers do something wrong with those tools, then it's a flaw in the system/tool that you should fix. And that is by far the most fulfilling thing you can strive for as a programmer. As a designer, on the other hand, it should be your job to make the technology shine. It's certainly a symbiotic relationship.

As for testing, I think that deserves a whole new discussion of its own ;-)

Alex
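P.S. To make that a bit more concrete, here's a rough sketch of the kind of structure I mean. The names are invented rather than lifted from any real engine: a plain priority selector in which a designer-pinned order simply outranks the autonomous goal-directed branch, so micro-management and autonomy live in the same tree.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <utility>
#include <vector>

enum class Status { Success, Failure };

// Base node of the hypothetical tree.
struct Behavior {
    virtual ~Behavior() {}
    virtual Status Tick() = 0;
};

// Leaf that just reports what it would do.
struct Action : Behavior {
    explicit Action(std::string what) : what_(std::move(what)) {}
    Status Tick() override { std::cout << what_ << "\n"; return Status::Success; }
    std::string what_;
};

// Runs children in priority order until one succeeds.
struct Selector : Behavior {
    std::vector<std::unique_ptr<Behavior>> children;
    Status Tick() override {
        for (auto& child : children)
            if (child->Tick() == Status::Success) return Status::Success;
        return Status::Failure;
    }
};

// Highest-priority branch: only fires if a designer has pinned an order.
struct DesignerOrder : Behavior {
    std::string order;  // empty means "no override, stay autonomous"
    Status Tick() override {
        if (order.empty()) return Status::Failure;
        std::cout << "Scripted: " << order << "\n";
        return Status::Success;
    }
};

int main() {
    auto pinned = std::make_unique<DesignerOrder>();
    DesignerOrder* handle = pinned.get();

    Selector root;
    root.children.push_back(std::move(pinned));
    root.children.push_back(std::make_unique<Action>("Autonomous: pick a goal and plan for it"));

    root.Tick();   // no order pinned -> the agent plans for itself
    handle->order = "Hold this doorway until I'm done with it";
    root.Tick();   // order pinned -> the designer gets exact control
}
```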

zoombapup on March 5th, 2008

One thing that needs to happen is that the quality of tools needs to be upped to the point where designers get used to working with the AI-enabled constructs in such a way that they aren't just hacking at it. Have you ever seen a designer script something? It's quite often a mess. I'm sure most of the designers I've met would be happy to have some kind of visual programming "language" where they can click together behaviors. So maybe the biggest battleground right now isn't in the AI systems, but in the editing interfaces and designer control systems.
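Just to sketch what I'm picturing (the block names and the file format are completely made up): the visual editor saves out nothing but an ordered list of vetted building blocks, and the runtime assembles the behaviors from that, so there's no free-form script left for a designer to hack at.

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Behavior { std::string name; };

// The runtime only exposes blocks that programmers have vetted.
std::map<std::string, std::function<Behavior()>> MakeBlockRegistry() {
    return {
        { "GuardArea",        []{ return Behavior{"GuardArea"};        } },
        { "InvestigateNoise", []{ return Behavior{"InvestigateNoise"}; } },
        { "CallForBackup",    []{ return Behavior{"CallForBackup"};    } },
    };
}

int main() {
    // What the hypothetical editor saved: the ordered list of blocks the
    // designer clicked together. No free-form script, nothing to hack at.
    std::vector<std::string> saved = { "GuardArea", "InvestigateNoise", "CallForBackup" };

    auto registry = MakeBlockRegistry();
    std::vector<Behavior> agentBehaviors;
    for (const auto& blockName : saved) {
        auto it = registry.find(blockName);
        if (it != registry.end())
            agentBehaviors.push_back(it->second());
        else
            std::cout << "Unknown block: " << blockName << "\n";  // an editor bug, not a designer error
    }

    for (const auto& behavior : agentBehaviors)
        std::cout << "Loaded behavior block: " << behavior.name << "\n";
}
```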

Jare on March 6th, 2008

Phil, I'm not sure it's a problem of editing interfaces; perhaps it is related to designer control systems, if we both mean the same thing by that.

We have increasingly complex agents, made up of interconnected systems like:
- Innate abilities: to follow a patrol route, to find a path to a target, etc.
- Perceptions: we limit the knowledge an entity can acquire in order to mimic that of the player himself: cones of vision, obstructions, etc.
- Memories: I remember where I last saw you, where you were heading, what you looked like, etc.

We also have higher constructs because the player no longer accepts 3 soldiers not acting like a coordinated platoon: avoid obstructing each other, splitting roles (you cover fire while I advance), etc. On top of that, we limit the ability to "make things happen" and instead rely on computing the right inputs for the physics engine.

Designers normally love all this, but sometimes they decide they need guy G at point P doing D. It's easy for programmers to give them hooks for that, and even assist them in scripting them properly (a practice I encourage). The problem is in making such things happen within the context and limitations we WANT the game to enforce. A dialog that many programmers will recognize:

- I need to make a guy show up there when the player arrives.
- There you go, do CreateChar and it happens.
...
- Ok, that worked great, but now I see him popping into existence there.
- You can create him behind the building and then tell him to GoTo the right point.
...
- That worked, and it's cool to see him do that, but now if the player is running fast he will have gone past that point and missed the guy.
- You can SetSpeed the guy to run fast so the player can't be that fast.
- But I don't know how fast I need to make him, and I have to reload and replay the mission for every try. This could take all week just tweaking one number!
- But you can reset and replay that part of the mission with your tweak if we just set some extra debugging checks in the script. Here, let me show you...
...
- Hi again! I did that, but the speed had to be very high, and now if the player is approaching from the east he can see the guy running, and it looks goofy when the guy suddenly stops for no apparent reason.
- Grrr... Hey, do you REALLY need to have that guy up there precisely at that point?

Etc., etc. You had the functions, you had a programmer to edit the thing, and you still had a problem. How long this kind of thing goes on before a solution, workaround, or alternative is found does vary, but the pattern in such discussions tends to be that doing X, while perfectly possible and easy, does not play well with the rest of the assumptions, goals and mechanics of the game. In order to make X work, you need to spend additional effort so X when A, X when B and X when C all work AND make sense.

The designer has the tools, but he needs to explore the possibility space and build a script that takes it all into account; he needs a problem-solver mentality that is more often found among programmers. Sometimes he will mutate the scripted event into a systemic process, and then find that the thing feels mechanical, generic, and has lost its memorable uniqueness. Sometimes the programmer can add new checks and functions for certain circumstances and combinations, and then face a bloated codebase that does so many things and has so many parameters, for reasons he doesn't remember anymore.

In the end, you have to accept that simple things in a complex environment can't be simple anymore - and hopefully schedule them as such.
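To show the shape the script ends up with after a few rounds of that dialog (all the function names here are invented for the example; every engine spells its hooks differently):

```cpp
#include <iostream>

// Stand-ins for hypothetical engine hooks.
int CreateChar(const char* archetype, float x, float y) {
    std::cout << "Spawn " << archetype << " at (" << x << "," << y << ")\n";
    return 42;  // entity id
}
void GoTo(int entity, float x, float y)       { std::cout << "Entity " << entity << " moves to (" << x << "," << y << ")\n"; }
void SetSpeed(int entity, float metersPerSec) { std::cout << "Entity " << entity << " speed = " << metersPerSec << "\n"; }
bool PlayerApproachingFromEast()              { return false; }  // stub for the example

void OnPlayerEnteredTrigger() {
    // Round 1: just make him show up.      -> he popped into existence
    // Round 2: spawn him out of sight and walk him in.
    int guy = CreateChar("Rebel", /*behind the building*/ 120.0f, 45.0f);
    GoTo(guy, /*the point the designer wants*/ 135.0f, 60.0f);

    // Round 3: the player can outrun him, so crank the speed...
    SetSpeed(guy, 9.5f);  // magic number found after a week of replays

    // Round 4: ...unless the player comes from the east and can see him
    // sprint and then stop for no reason. Another special case, and so on.
    if (PlayerApproachingFromEast())
        SetSpeed(guy, 4.0f);  // and now he is late again...
}

int main() { OnPlayerEnteredTrigger(); }
```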

Sergio on March 6th, 2008

The disconnect between coders and designers is not the only one I've seen. In fact, I'd say this particular problem is not with designers in general, but with level designers. And it's not their fault; it's just that their job is hard.

Level designers need to create an experience for the player, and the tools they have are the ones the programmers provide, based on the game design. But sometimes they can't find good ways to use the game mechanics (and annoy game designers), or end up scripting a set-piece scene rather than trying to tune all the possibilities in an emergent situation.

We have to be understanding, though. Level designers basically depend on the whole game working perfectly to do their jobs. Instability and unfinished/buggy features are their bane. Also, being realistic, they are often brought quite late to the project, and they're overworked/understaffed. Add to that the fact that not all of them have a programmer's problem-solving mentality, like Jare said, and you can understand how they usually resort to doing the simplest thing for them, which is to tightly script the game missions.

The solution is complicated. We're working in an interactive medium, so I firmly believe that level designers need to improve technically and be able to think procedurally. They need to become less of a film director or a writer. I would like to see this trend extend across the industry, although lately hits like Call of Duty 4 seem to be heading in a different direction. But most importantly, they need to have more time to work on their missions, and get the support of the rest of the team.

Lastly, for the record, even though I agree that everyone contributes to the game experience, saying that everyone is a designer is taking it too far. Being a designer is a full-time job, and requiring all the coders and all the artists to do it on top of their normal jobs is not only too much to ask, it also undermines the importance of the designer position. Cooperation and communication are key here.

memoni on March 8th, 2008

Communication is hard. It is even harder when the meanings of the same word are worlds apart. During my last months at Crytek I had extensive talks with designers in order to try to solve the exact same problems highlighted previously. More than anything we talked about the semantics of the words. This was both fruitful and frustrating, since I noticed that it was possible to have a perfectly understandable discussion for hours and later realize that the other party got it all wrong. I guess this is the usual programmer-designer talk, where you kind of expect that they don't always get the jargon right. They should!

What I ended up doing was to figure out the structure of the whole process of building anything AI-related in an FPS game. What kinds of things need to be designed to get a new feature (say, reaction to bullet hits) into the game? What is the workflow of a level designer or an animator who creates a new asset for that jump-over-fence navigation thing? What kinds of things does the programmer build and design? In the case of emergent behaviors, where the hell do you put the story (everyone hates cut scenes anyway)? How can I make the producers and managers understand that "just add a flinch reaction" is not that easy when you need to make it happen in every possible context?

Once you know the structure and you can lay it out, it is _so_ much easier to talk about it! In case you are not sure what the other person means, you can ask him to point to it on the "map". I had no idea how differently designers and programmers talk and think until we got to the point where we were able to communicate, not just share words.

Finding the structure usually means some constraints on the designers. I think the case that Jare describes is the usual situation where there are not enough constraints on designers and not a clear enough structure for the designers to be able to find alternative solutions. The programmer's mind is usually wired in such a way that when you make something generic you can solve multiple problems with one gun. I think physics is a good example of this. AI is different. If we desire human-like representation (correct sound and animation in a meaningful context) we need to author the content for every single case. At best maybe we can reuse the same asset for a couple of similar cases. Techniques like animation parametrization will help here so that we can create more "analog" assets. Either way, someone needs to author them. While all the complexity of physics comes from the underlying algorithms describing the dynamics, the underlying complexity of AI comes from the amount of feedback delivered via the representation. The people who long for "real AI" usually somehow do not get this point. While it would be awesome to have some kind of asset generator, I would not bet my money on it happening anytime soon. Especially if we refuse to work through the current, more brute-force solution.

I love emergent behaviors, and I think many designers love them too. But sometimes you need more constrained cases, for the sake of the overarching story for example. But making it a forced special case will make it really brittle, and if reused in the wrong context it will look horrible. Characters will run around without shooting at the player or just sit in a vehicle and not react at all. The AI architectures I've seen so far all implement the emergence in such a way that it is impossible to control and constrain it. The only exception is the Halo stuff, which is going in the right direction.

I think the monolithic all-sensing-all-seeing approach that we all currently embrace is a bad way of building games. Firstly, it is really complex to test; secondly, it is really complex to do something that is not built into the mastermind; and it is a pain in the arse to maintain. Many of the current tech solutions try to solve the problem of maintaining such complex behavior trees or state machines. They are not solving the right problem! We should fix the level of abstraction instead. (I may try to shed more light on this later.)

I think the lack of understanding of the structure of the AI in action/FPS games makes 99% of the AI middleware really useless too. They just solve the "easy" problems like path finding or the technical implementation of behavior trees and the like. Yes, it takes time and effort to create a fast and robust navigation system, but it is nowhere near as hard a problem to solve as robust emergent behaviors which look awesome. We definitely need better tools, but before that we need to better understand what the heck we are building.

Dave Mark on March 8th, 2008

If you listened to the roundtables, you heard a mantra that I kept repeating that week. It's actually co-opted from Neil Kirby's article in "AI Game Programming Wisdom 1" but it's hardly a novel idea. That is "solve the right problem." I think that as AI programmers, designers, or as a *team*, that issue is something that often gets a little ephemeral. I think Jare's anecdote was along those lines. Anyway, glad to see some really solid responses here. You've made me proud on my debut discussion article.

memoni on March 10th, 2008

Dave, I tried to listen to them. I'm not a native English speaker and had trouble understanding what people said behind that wall of noise. I did check the notes, though.

"Solve the right problem" is a good mantra. It should at least keep you on the edge and force you to step back every now and then. But at the same time it is as dangerous as the usual producer mantra "need to communicate better", which too often is misinterpreted as "need to communicate _more_", which is bad because it actually just means more noise :)

One thing I forgot to mention in my last post is that I like the idea that designers should start to think of the AI setup on levels more as some kind of procedural thing. The problem with that is that designers do not understand "procedural" the same way as programmers. (The following has a strong action game bias.)

The procedural bits need to be relatively straightforward and transparent in order for the designers to grasp them. For example, something that might be understandable is that a designer could place a circle and a point in the level and tell a group of agents to always stay inside the circle and regroup between the point and the threat (the player). Depending on the player verbs and the NPC verbs, the above situation can lead to different kinds of gameplay scenarios, but the setup is simple enough that it is self-documenting. The AI programmer can create a simple debug draw for that situation, and it is simple enough even for the designers to see how their adjustments affect the behavior (i.e. regroup time, distance to the player, size of the circle and the location of the point, etc.). On top of that you can create different styles (how they move and how they maybe fight back) and add annotation (hide points or "nice" points) which adds detail to the behavior and allows the designers to better fit the behavior into the mood of the scene.

Now if you extrapolate that towards the system that was described in the Halo 3 talk at GDC, then you get a system which has a lot of layered dynamic things working together (you could even have one layer of really reactive dynamic things at the individual agent level), but in understandable chunks that can be communicated and understood by people from different disciplines. I think in this case solving the right problem means not using the same hammer for all the nails, but using a solution on each level which solves the problem in the most meaningful way. The individual pieces can be simple but the whole can be perceived as complex and unique. Ant-on-a-beach...
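A tiny sketch of the circle-and-point setup I described above, with made-up names, just to show how little data the designer actually has to author and how transparent the rule stays:

```cpp
#include <cmath>
#include <iostream>

struct Vec2 { float x, y; };

Vec2  Sub(Vec2 a, Vec2 b)    { return { a.x - b.x, a.y - b.y }; }
Vec2  Add(Vec2 a, Vec2 b)    { return { a.x + b.x, a.y + b.y }; }
Vec2  Scale(Vec2 a, float s) { return { a.x * s, a.y * s }; }
float Length(Vec2 a)         { return std::sqrt(a.x * a.x + a.y * a.y); }

// Designer-authored data: a circle, a point, and one tuning knob.
struct RegroupSetup {
    Vec2  circleCenter;
    float circleRadius;
    Vec2  regroupPoint;
    float bias;  // 0 = sit on the point, 1 = press right up to the threat
};

// Where should an agent stand, given where the threat (the player) is?
Vec2 PickRegroupSpot(const RegroupSetup& setup, Vec2 threat) {
    // Blend between the authored point and the threat...
    Vec2 spot = Add(setup.regroupPoint, Scale(Sub(threat, setup.regroupPoint), setup.bias));

    // ...then clamp back inside the circle so the group never leaves it.
    Vec2  fromCenter = Sub(spot, setup.circleCenter);
    float dist = Length(fromCenter);
    if (dist > setup.circleRadius)
        spot = Add(setup.circleCenter, Scale(fromCenter, setup.circleRadius / dist));
    return spot;
}

int main() {
    RegroupSetup setup = { {0, 0}, 10.0f, {5, 0}, 0.5f };
    Vec2 player = { 30.0f, 0.0f };
    Vec2 spot = PickRegroupSpot(setup, player);
    std::cout << "Agent regroups at (" << spot.x << ", " << spot.y << ")\n";
}
```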
