
Game AI Roundup Week #50 2008: 10 Stories, 1 Video, 1 Demo

Alex J. Champandard on December 22, 2008

Weekends at AiGameDev.com are dedicated to rounding up smart links from the web relating to artificial intelligence and game development. This week, as always, there are some good articles and blog posts for you to read. Remember, there’s also lots of great content to be found in the forums here! (All you have to do is introduce yourself.) Also don’t forget the Twitter account for random thoughts!

If you have any news or tips for next week, be sure to email them in to editors at AiGameDev.com. Remember there’s a mini-blog over at news.AiGameDev.com (RSS) with game AI news from the web as it happens.

Is an AI-controlled Narrative Really Beneficial to Players?



On his Playwrite blog, Logan Booker wrote up his thoughts on the IEEE article entitled “Bots Get Smart” that we posted a few weeks ago. In particular, he analyses the benefits of procedural narrative in practice:

“Fundamentally, PaSSAGE shifts the focus from the ‘one situation, many solutions’ design philosophy of most role-playing games to a “many situations, one solution” one instead. This doesn’t mean it’s better, just different. Content creators go from crafting a limited set of specific scenarios to a metric crapload of generic ones. The smart thing to do, at least from a pipeline perspective, would be to make it so monsters, treasure and NPCs can be swapped out to recycle as much content as possible. However, a designer could manually create scaling scenarios if there’s a fear it’ll be too generic.”

AAMAS 2009 — Budapest

Over the years, AAMAS has built up a reputation as the conference on autonomous agents and multi-agent systems, and it occasionally showcases research that’s relevant to game developers. The next event will take place next year:

“You are cordially invited to participate in the Eighth International Conference on Autonomous Agents and Multiagent Systems, to be held May 10-15, 2009 at the Europa Congress Center in Budapest, Hungary. AAMAS is the leading scientific conference for research in autonomous agents and multi-agent systems. The AAMAS conference series was initiated in 2002 as a merger of three highly respected individual conferences: the International Conference in Autonomous Agents, the International Workshop on Agent Theories, Architectures, and Languages, and the International Conference on Multi-Agent Systems. The aim of the joint conference is to provide a single, high-profile, internationally respected archival forum for research in all aspects of the theory and practice of autonomous agents and multi-agent systems.”

Havok hints at next product

This one falls into the likely-rumors category. AiGameDev.com sponsor Havok has hinted that their next product will focus on dynamic, heavily physics-based worlds:

“Speaking to Develop, David O’Meara, MD of Havok, has hinted that the company’s seventh product will focus on in-game character behavior in dynamically destructible environments.”

AI Checkers Tournament

Marc Backes writes about his experiences developing Abraham, a checkers program he co-developed to enter a contest. Checkers has theoretically been solved, but the contest imposed a 15-second time limit, which made the event challenging.

“Our program was programmed in C++, and we used a text mode presentation to display the board. We hoped, when we killed all unnecessary processes (graphical user interface, network manager, daemons, …), we would be faster. As well, we started the program with a “nice” value of -20 (highest priority).”

This just goes to show the importance of creating reliable code!
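
Working to a hard clock like that is what makes these contests interesting from an engineering standpoint. As a rough illustration only (this is not Abraham’s code, and the legal_moves, apply_move and evaluate hooks are hypothetical placeholders), a time-limited iterative-deepening alpha-beta search in Python might look like this:

```python
# Minimal sketch of a time-limited iterative-deepening search (negamax form
# of alpha-beta). The game-specific hooks below are hypothetical placeholders,
# not anything from the Abraham program itself.
import time

TIME_LIMIT = 15.0    # seconds allowed for the whole search
SAFETY_MARGIN = 0.5  # stop a little early so a move is always returned in time

def legal_moves(state):
    """Placeholder: return the legal moves for `state`."""
    return state.moves

def apply_move(state, move):
    """Placeholder: return the successor state after playing `move`."""
    return state.play(move)

def evaluate(state):
    """Placeholder: static score from the side to move's point of view."""
    return state.score

def alphabeta(state, depth, alpha, beta, deadline):
    if time.monotonic() > deadline:
        raise TimeoutError   # abandon this iteration, keep the previous result
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    best = float("-inf")
    for move in moves:
        score = -alphabeta(apply_move(state, move), depth - 1,
                           -beta, -alpha, deadline)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break   # prune: the opponent will never allow this line
    return best

def choose_move(state):
    """Iterative deepening: keep the best move of the last fully-completed
    depth and stop as soon as the clock runs out."""
    deadline = time.monotonic() + TIME_LIMIT - SAFETY_MARGIN
    best_move, depth = None, 1
    try:
        while True:
            current_best, current_score = None, float("-inf")
            for move in legal_moves(state):
                score = -alphabeta(apply_move(state, move), depth - 1,
                                   float("-inf"), float("inf"), deadline)
                if score > current_score:
                    current_best, current_score = move, score
            best_move, depth = current_best, depth + 1
    except TimeoutError:
        pass
    return best_move
```

Each completed depth overwrites the previous best move, so when the clock runs out the program always has a legal answer ready, which is one way to keep the code reliable under a time limit.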

Architecture: Animation / AI Layer

Veteran developer Dan Kline wrote about his current experiments with animation systems:

“In practice, most AI Behaviors want to be able to switch to every other behavior. Consider Behavior Trees, where there are traditionally no limitations on valid transitions, just pre-conditions in the Decision Making layer. If the Behavior says it should run, 9 times out of 10 the Anim Coordinator should try and comply. That means every macro Anim state in your FSM is connected to every other macro Anim state (assuming some sort of hierarchical organization for sanity). That’s the worst case for an FSM architecture imaginable. Oops. Some games have used planning to map through this morass, but that doesn’t make the FSM any easier to work with, it just means you can have longer independent state strings.”

This is a huge topic, and one we’ve covered extensively here at AiGameDev.com too.
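
To make the contrast concrete, here is a small hypothetical sketch in Python of the request-based alternative Dan hints at, in no way his actual system: the behaviour layer asks for a macro animation state through one interface, and the coordinator works out how to get there, so no one has to author every pairwise FSM transition. The state names and the transition table are invented for the example.

```python
# Hypothetical sketch of a request-based animation coordinator: the behaviour
# layer asks for a macro state, and the coordinator decides how to get there,
# instead of an FSM that authors every pairwise transition explicitly.
from collections import deque

class AnimCoordinator:
    # Transitions the animation system can play directly; anything else is
    # routed through a neutral pose first. All of these names are made up.
    DIRECT = {
        ("stand", "walk"), ("walk", "run"), ("run", "walk"),
        ("walk", "stand"), ("stand", "crouch"), ("crouch", "stand"),
    }
    NEUTRAL = "stand"

    def __init__(self):
        self.current = self.NEUTRAL
        self.queue = deque()

    def request(self, target):
        """Called by the AI behaviour: 'I want to be in this macro state.'"""
        self.queue.clear()   # the latest request wins
        if target == self.current:
            return
        if (self.current, target) in self.DIRECT:
            self.queue.append(target)
        else:
            # No hand-authored transition: go through the neutral pose.
            if self.current != self.NEUTRAL:
                self.queue.append(self.NEUTRAL)
            if target != self.NEUTRAL:
                self.queue.append(target)

    def update(self):
        """Called once per frame: advance one queued transition."""
        if self.queue:
            self.current = self.queue.popleft()

# Usage: the behaviour can ask for anything; the coordinator fills in the steps.
anim = AnimCoordinator()
anim.request("walk")      # stand -> walk is authored, one step
anim.update()
anim.request("crouch")    # walk -> crouch is not: walk -> stand -> crouch
anim.update(); anim.update()
print(anim.current)       # "crouch"
```

The worst-case fully-connected wiring disappears because any transition that was not authored is simply routed through a neutral pose.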

“Exclusive” Interviews about F.E.A.R. 2’s A.I.

There have been a bunch of interviews with Monolith developers since the behind-the-scenes video about F.E.A.R. 2’s AI:

Q: F.E.A.R.’s A.I. was lauded for its craftiness and realism – three years ago. Yet here we are now, and few games have passed or even approached F.E.A.R.’s lofty heights. Why is that? Do game developers care more about tightening up the graphics on level three than improving NPC intelligence?

A: After Shogo, we decided AI needed to be a much higher priority. If you’re making a game based around fighting NPC enemies, the only way that’s going to be really fun is if they present a suitable challenge. Part of it is making the enemies tactically smart and showing them coordinating with each other, but you also want to feel like they have a desire for self-preservation.

Q: Were you worried that such an immense piece of firepower could be detrimental to the tight claustrophobic feel of the game?

A: Sure, it could have been too liberating, absolutely. But then we started to analyse our AI. The way that it works is, there’s nothing scripted, everything is stimulus-based, so the more that we can educate the AI about the environment, the better they can utilise it to their advantage. So we let them be aware of fire and other environmental hazards – they are aware of potential combat opportunities. So as you start to play you’ll notice, when you make a very specific action, they will have a very appropriate reaction. Also, if you look at our environments, you see real quick – we still recognise that tight enclosed space is great for high-intensity combat. But too much of that can become numbing, so we want to expand some of the space, and as soon as we started to do that, we realised that the pacing and the flow of combat changed. We looked at it and asked, what are the strengths of close, frenetic combat and how can we play them against the open areas to create a good ebb and flow? We use environmental volume as a way to give pacing.
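
The “nothing scripted, everything is stimulus-based” idea can be pictured as the world broadcasting tagged stimuli that any agent in range reacts to. The toy Python sketch below follows that reading purely for illustration; it does not reflect Monolith’s implementation, and every name in it is invented.

```python
# Toy illustration of stimulus-based behaviour: the world broadcasts tagged
# stimuli (fire, grenades, cover points) and each agent in range reacts with
# whatever response it has registered, with no per-encounter scripting.
# None of this reflects Monolith's actual implementation.
from dataclasses import dataclass

@dataclass
class Stimulus:
    kind: str         # e.g. "fire", "grenade", "cover_point"
    position: tuple   # world position of the stimulus
    radius: float     # how far away agents still care about it

class Agent:
    def __init__(self, name, position):
        self.name = name
        self.position = position
        # Map stimulus kinds to reactions; easy to vary per enemy archetype.
        self.reactions = {
            "fire": self.flee,
            "grenade": self.flee,
            "cover_point": self.take_cover,
        }

    def distance_to(self, pos):
        return sum((a - b) ** 2 for a, b in zip(self.position, pos)) ** 0.5

    def perceive(self, stimulus):
        if self.distance_to(stimulus.position) > stimulus.radius:
            return   # out of range: the agent never notices it
        reaction = self.reactions.get(stimulus.kind)
        if reaction:
            reaction(stimulus)

    def flee(self, stimulus):
        print(f"{self.name} moves away from the {stimulus.kind}")

    def take_cover(self, stimulus):
        print(f"{self.name} moves to cover at {stimulus.position}")

# The level only tags the hazard; every agent close enough reacts on its own.
squad = [Agent("Replica_A", (0, 0)), Agent("Replica_B", (30, 0))]
for agent in squad:
    agent.perceive(Stimulus("fire", position=(2, 1), radius=10.0))
```

The appeal of this kind of setup is that a new hazard type only needs a tag in the level and a reaction on the agent, rather than per-encounter scripting.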

Q: More than just governing the way characters and NPCs behave in the game, could AI be used more impressively, or maybe more effectively, in other areas?

A: In terms of making an individual AI encounter better? To some extent, we’re reaching the same problems that we’re having in graphics. Every generation, the technology gets better, right? We’re supposed to be able to push more polygons; we’re supposed to have more shaders on the screen. The problem with that being that you now have to have artists create X amount more content. So I think we’re going to start seeing, in terms of graphics, you’re starting to see a slowdown. Between the PS1 and the PS2, there was a huge leap, I feel, in graphical fidelity; and then less so between the PS2 and the PS3.

And I think we’re kind of reaching that same point with AI. I mean, we can make the AI incredibly much more complicated, but it still requires animators to create thousands of animations, versus hundreds of animations; it requires the character artists to create far more detailed maps, and when you create far more detailed character maps, players expect full facial animation, which requires even more artist content.

And then the AI needs to know more about the world, in order to behave that much better, so that means that the level designers spend a lot more time carpeting a level with AI hints. So, one of the big events that I think we’ll see soon is a lot more automation in the way that AI is placed in the game. Which doesn’t necessarily mean a direct influence on the way the player perceives it, but it’ll be much easier for the game makers to make the game, which means that they’ll be able to focus more time on improving the AI in other areas.

Neural Nets and Visualization

A Danish developer has written about his experiences applying neural networks to a Pacman game:

“Although the best AI controller I created for Ms. Pacman used pretty simple AI, I spent a lot of time looking at several more advanced techniques, including neural nets. I never figured anything special out, but I did make a pretty flexible and fast-performing neural net in C# and a visualization using (the then pretty new) Windows Presentation Foundation - that version is in the link to the Ms. Pacman controller.”

A neural network is not necessarily any more advanced than state machines or scripts, but this sounds like an interesting project nonetheless.

Learning Kendo Autonomously

“This demo combines a Genetic Algorithm along with a Neural Network. Each fighter has its own Neural Net, and every 100ms it applies “weights” to each of 11 different variables in the game (such as current positions, velocities, whether they’re striking, etc). By weighting and relating each of those variables, the Neural Net generates “outputs” indicating what the fighter should do (such as new velocity and whether / which target to strike). Coming up with the weights (or “training”) is the trickiest part of Neural Networking, but that’s where the Genetic Algorithm comes in.”
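
A plausible reading of that description is a fixed-topology feed-forward network whose flattened weights form the genome of a genetic algorithm. The Python sketch below follows that reading; the layer sizes, the fitness function and the GA parameters are assumptions made for illustration, not details taken from the demo.

```python
# Sketch of evolving fixed-topology neural-net controllers with a genetic
# algorithm, following the demo's description: 11 inputs sampled every 100ms,
# outputs for the fighter's new velocity and whether to strike. The layer
# sizes, fitness function and GA parameters below are illustrative guesses.
import math
import random

N_INPUTS, N_HIDDEN, N_OUTPUTS = 11, 8, 3   # e.g. velocity x, velocity y, strike
GENOME_LEN = N_INPUTS * N_HIDDEN + N_HIDDEN * N_OUTPUTS

def forward(genome, inputs):
    """Feed-forward pass: the genome is just a flat list of weights."""
    w1 = genome[:N_INPUTS * N_HIDDEN]
    w2 = genome[N_INPUTS * N_HIDDEN:]
    hidden = [math.tanh(sum(inputs[i] * w1[i * N_HIDDEN + h]
                            for i in range(N_INPUTS)))
              for h in range(N_HIDDEN)]
    return [math.tanh(sum(hidden[h] * w2[h * N_OUTPUTS + o]
                          for h in range(N_HIDDEN)))
            for o in range(N_OUTPUTS)]

def evaluate(genome):
    """Placeholder fitness: in the demo this would come from simulating a
    bout and scoring hits landed against hits taken."""
    dummy_inputs = [random.uniform(-1, 1) for _ in range(N_INPUTS)]
    return -sum(o * o for o in forward(genome, dummy_inputs))

def mutate(genome, rate=0.05, scale=0.3):
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=50):
    population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=evaluate, reverse=True)
        parents = ranked[:pop_size // 4]   # simple truncation selection
        population = parents + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
    return max(population, key=evaluate)

best_controller = evolve()
```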

Resistance 2 Review

Cooperative AI has certainly been a trend this year, but as ever, it’s really tough to pull off. This informal review of Resistance 2 shows exactly what that means in practice:

“When hordes of those stupid zombies run towards you, your AI buddies do nothing but stand around. See, the AI of opponents and teammates is a mixed bag. Sometimes it works and squadmates are really helpful. But other times you happen to be in situations like the one mentioned above. Also the AI of the opponents can get quite unfair. Imagine standing amidst a big battle doing nothing. In such moments the AI won’t care about you; there is not one shot directed at you. But as soon as you fire the first shot all hell breaks loose and everything is shooting at you as if your squadmates don’t exist. Now imagine there are 30 zombies running towards you and 3 teammates, your teammates doing nothing and the AI only attacking you. Not much fun.”

Procedural Game Narrative, Trend of 2008

Many AAA games this year have tried to innovate, some with more success than others. Far Cry 2, according to Gamasutra, is one of them:

“While a great majority of games continue to use cutscenes to tell their stories, the emergence of significant new narrative forms has given game developers plenty of food for thought in 2008. At the forefront of this is perhaps Ubisoft Montreal’s Far Cry 2, which has an extremely dynamic world, with enemies that help each other to safety when wounded and an incredibly complex fire system, alongside an ambitious narrative system that reacts to player actions in its sandbox world by dynamically reassigning dialogue to available actors. According to the game’s narrative designer, Patrick Redding, “If we had tried to not support that dynamic approach, what we would have ended up with is a story that really felt like it was kind of progressing along more or less independently of player action… And we felt there was no point in doing that.” With all kinds of artful, amazing events dynamically created by the randomness inherent in the game world — such as bounding African fauna causing enemies to crash their vehicles — it creates an experience that’s new and different almost every time. But as the game underperformed at retail, a question that probably needs asking is — do players really want a living world, or do they just want scripted events that convince them they are playing in one?”

Genetic Program and Traveller

An interesting story has been posted about Doug Lenat’s early work on a program that could improve itself to play the game of Traveller.

“Since a fleet may have as many as 100 ships–exactly how many is one more question to decide–the number of ways that variables can be juxtaposed is overwhelming, even for a digital computer. Mechanically generating and testing every possible fleet configuration might, of course, eventually produce a winner, but most of the computer’s time would be spent blindly considering designs that are nonsense. Exploring Traveller’s vast “search space,” as mathematicians call it, requires the ability to learn from experience, developing heuristics–rules of thumb–about which paths are most likely to yield reasonable solutions.”

The story is a good read, but it should be taken with a pinch of salt, as romantic accounts of 25-year-old programs tend to exaggerate.
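
Still, the contrast the article draws between blind generate-and-test and heuristic search is easy to show in miniature. The Python sketch below uses an invented three-attribute fleet model that has nothing to do with Eurisko or the real Traveller rules; it just compares exhaustive enumeration of a small design space with a hill climber that only ever looks at neighbouring designs.

```python
# Toy contrast between brute-force generate-and-test and heuristic search over
# a tiny fleet-design space. The "fleet" model and its scoring rule are
# invented for illustration and have nothing to do with Eurisko or the real
# Traveller rules.
import itertools

BUDGET = 20   # points to split between armour, guns and engines

def fleet_score(armour, guns, engines):
    """Made-up evaluation: guns and armour reinforce each other, engines add
    a flat bonus, and overspending the budget is simply illegal."""
    if armour + guns + engines > BUDGET:
        return -1
    return 0.5 * armour * guns + 2 * engines

# 1. Blind enumeration: score every legal split of the budget.
exhaustive_best = max(
    (d for d in itertools.product(range(BUDGET + 1), repeat=3)
     if sum(d) <= BUDGET),
    key=lambda d: fleet_score(*d))

# 2. Heuristic search: hill-climb from a full-budget design, only ever
#    looking at neighbours that shift one point between two attributes.
def neighbours(design):
    for i in range(3):
        for j in range(3):
            if i != j and design[i] > 0:
                shifted = list(design)
                shifted[i] -= 1
                shifted[j] += 1
                yield tuple(shifted)

def hill_climb(max_steps=200):
    current = (BUDGET - 2 * (BUDGET // 3), BUDGET // 3, BUDGET // 3)
    for _ in range(max_steps):
        best_next = max(neighbours(current), key=lambda d: fleet_score(*d))
        if fleet_score(*best_next) <= fleet_score(*current):
            break   # local optimum: no neighbour improves on this design
        current = best_next
    return current

climbed = hill_climb()
print("exhaustive best:", exhaustive_best, fleet_score(*exhaustive_best))
print("hill-climb best:", climbed, fleet_score(*climbed))
```

Even in this toy space the hill climber only ever evaluates a handful of neighbours per step, while the exhaustive pass grinds through every legal split of the budget; that gap is exactly what explodes in a space the size of Traveller’s.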

Self-Improving Artificial Intelligence



“Lecture by Steve Omohundro for the Stanford University Computer Systems Colloquium (EE 380). Steve presents fundamental principles that underlie the operation of “self-improving systems,” i.e., computer software and hardware that improve themselves by learning from their own operations.”
