Another weekly roundup of fascinating Smart Links from the web, including interviews and behind-the-scenes videos, is brought to you by Mark Wisecarver and Alex Champandard. If you have any news or tips for next week, be sure to email them in to editors at AiGameDev.com.
Epic’s Steve Polge Explains the AI in UT3
An interview with the programmer behind the bots in Unreal Tournament 3 discusses the toughest areas to get right, including the challenges posed by modern hardware. As it turns out, making an NPC “feel” human, with the same kinds of reactions and limitations, is the most challenging problem in game AI.
The whole interview is worth a quick read, but here’s an extract about machine learning:
“One of the ways that UT3 bots learn during gameplay include dynamically adjusting the costs of the path network to reflect things like “killing zones”. This allows them to learn areas to avoid because they are covered by a sniper, for example.”
The Unreal Engine isn’t famous for its AI technology, but there’s certainly a lot to learn from there!
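As a rough illustration of the idea in the quote, here’s a minimal Python sketch (hypothetical names and numbers, not Epic’s actual code) of a navigation graph whose edge costs rise where bots die and slowly decay back to normal:

```python
class NavGraph:
    """Minimal navigation graph with per-edge danger costs learned from deaths."""

    def __init__(self):
        self.base_cost = {}   # (a, b) -> normal traversal cost
        self.danger = {}      # (a, b) -> extra cost learned from deaths

    def add_edge(self, a, b, cost):
        self.base_cost[(a, b)] = cost
        self.danger[(a, b)] = 0.0

    def record_death(self, a, b, penalty=50.0):
        # A bot died traversing this edge: mark it as a "killing zone",
        # so the pathfinder routes around it from now on.
        self.danger[(a, b)] += penalty

    def decay(self, factor=0.95):
        # Forget old danger slowly, so bots re-explore areas once the
        # sniper is gone.
        for edge in self.danger:
            self.danger[edge] *= factor

    def cost(self, a, b):
        # What A* (or any path planner) would actually query.
        return self.base_cost[(a, b)] + self.danger[(a, b)]
```

Feed `cost()` into any standard pathfinder and the bots will appear to “learn” dangerous routes without any explicit reasoning about snipers.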
Artificial Technology Announces Eki One Middleware
The German company Artificial Technology has announced that its new modular AI middleware, Eki One, will be released in spring 2008. It’s unclear exactly what the software will provide, except that it supposedly helps game producers take advantage of cutting-edge technology while reducing development time and costs. From the website:
Expansion of the classic AI with new approaches from research and elements of psychology.
Simple integration in existing software solutions.
Exceptional user friendliness with use of intuitive GUIs and documentation for beginners.
GUI which adjusts to the specific workflow of the studio.
The company will be at GDC ‘08 in San Francisco, where more details should be available.
Artificial Technology Announces Eki One, New Funding, gamasutra.com
The Force Unleashed with Bio-mechanical AI
In a new wave of games heavily based on simulation, in particular Natural Motion’s Euphoria, which provides behavioral AI, LucasArts is now experimenting with the Star Wars franchise.
In theory, this new technology should help make the gameplay much more dynamic:
“The addition of Euphoria removes the canned scripted reactions of the enemy, sort of a Biomechanical AI. The same action by you, will not always generate the same reaction in the enemy. They can and will react differently to the same stimulus. As it was described in real game use, the enemies have self preservation built into their logic.”
Assassin’s Creed proved that solid technical innovation on a next-gen console is enough to sell a game, so this one should become a hit even if the gameplay doesn’t work out so well.
Original The Force Unleashed Documentary, lucasarts.com
The Force Unleashed: Pre-Review, theforce.net
Flocking Finally Explained?
We’ve all seen large flocks of birds exhibit this behavior: thousands of them instantly changing direction, banking right and left, with incredibly quick dives. Even using Craig Reynolds’ steering behaviors, coding this behavior into a game does not always meet expectations.
“Scientists have uncovered a simple rule that explains how thousands of starlings flock in formation and hope to use the discovery in the future to coordinate swarms of robots. […] The reasons why the starlings are able to fly with Red Arrow precision in vast numbers, tumbling and banking in nervous unison and without colliding, has tantalized scientists. Now it turns out that the secret is for each bird to track seven others, says the first detailed direct observations.”
Here’s an opportunity to learn from nature to make thousands of starlings move as one. It shouldn’t be too hard to implement either!
Study of Starling Formations Points Way for Swarming Robots, telegraph.co.uk
New Data on Flocking of Starlings, gamedev.net
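For game purposes, the seven-neighbour rule can be bolted straight onto classic steering. Here’s a minimal 2-D Python sketch (all gains and ranges are made-up tuning values) where each bird reacts only to its k nearest neighbours, rather than everything within a fixed radius:

```python
def step_flock(positions, velocities, k=7, align=0.05, cohere=0.01,
               separate=0.1, dt=1.0):
    """One update of a 2-D flock using the topological rule from the
    starling study: each bird tracks only its k nearest neighbours."""
    new_vel = []
    for i, (px, py) in enumerate(positions):
        # Find the k nearest neighbours by squared distance.
        others = sorted(
            (j for j in range(len(positions)) if j != i),
            key=lambda j: (positions[j][0] - px) ** 2 + (positions[j][1] - py) ** 2,
        )[:k]
        vx, vy = velocities[i]
        if others:
            n = len(others)
            # Cohesion: steer toward the neighbours' centre of mass.
            cx = sum(positions[j][0] for j in others) / n
            cy = sum(positions[j][1] for j in others) / n
            vx += cohere * (cx - px)
            vy += cohere * (cy - py)
            # Alignment: match the neighbours' average velocity.
            ax = sum(velocities[j][0] for j in others) / n
            ay = sum(velocities[j][1] for j in others) / n
            vx += align * (ax - vx)
            vy += align * (ay - vy)
            # Separation: push away from neighbours that are too close.
            for j in others:
                dx, dy = px - positions[j][0], py - positions[j][1]
                d2 = dx * dx + dy * dy
                if 0 < d2 < 4.0:
                    vx += separate * dx / d2
                    vy += separate * dy / d2
        new_vel.append((vx, vy))
    new_pos = [(p[0] + v[0] * dt, p[1] + v[1] * dt)
               for p, v in zip(positions, new_vel)]
    return new_pos, new_vel
```

The brute-force nearest-neighbour search is O(n²) per frame; a real game would use a spatial hash or k-d tree, but the topological idea is the same.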
Dwarf Fortress’ Persistent Procedural Worlds
Dwarf Fortress seems to be mentioned regularly, including in the game AI forums (registration and introduction required). This single-player fantasy game by Bay 12 lets you control a dwarven outpost or an adventurer in a randomly generated, persistent world, and features a dozen AI systems for generating random worlds and controlling creatures.
Screenshot 1: Procedurally generated worlds of Dwarf Fortress.
Although Dwarf Fortress is still a work in progress, many features have already been implemented. The world is randomly generated with distinct civilizations, dozens of towns, hundreds of caves, and regions with various wildlife. The world persists as long as you like, over many games, recording history and tracking changes.
Have you played the game? What do you think about this kind of persistent procedural world?
Turok’s Dynamic AI System
There are a few juicy details about Turok over on Gamasutra (see last week’s video too).
“Not only do we have an AI on the part of the human opponents, where you can take one, basically plop him into a world, and he’s able to make decisions and decide how to survive moment-to-moment, and how to seek out enemies and eliminate them using a variety of tactics. […] Where we’re breaking ground, I think, is applying that same approach to creatures, giving them behaviors and instincts and decision-making capabilities within the world, but then allowing them to interact with one another within the context of a basic ecosystem, and then also interact with the humans. When you get that complexity of different layers and mixing and matching those together, it results in a lot of really interesting situations.
One of the things we discovered in approaching dynamic AI systems is that until you actually have the system really well implemented and integrated, it’s very difficult for the level designers to plan out and create scenarios. Whereas before, because it’s so fixed in terms of exactly what’s going to happen in every situation, you can kind of design it all out up-front, and understand, “Okay, this is exactly how this encounter is going to play out.”
But because the encounters in our game are so driven by the AI system, until that AI is complete and you can start populating the world, the process of designing levels has a sandbox element to it as well, because you’re putting creatures into an area, and you’re putting humans into an area, and you’re watching what happens and tweaking it to create more thought, in terms of the experiences. So it is set up. Obviously the level designers are going through and trying to create these really interesting areas and scenarios and stuff, but a lot of it is driven on their creative side by the AI systems.”
The take-away idea here is that level designers can’t really start building these sandbox-style encounters until the AI system itself is feature complete.
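The layered approach described in the quote can be illustrated with a toy Python sketch (entirely hypothetical, not Turok’s actual code): every agent, human or creature, runs the same sense/decide loop, and ecosystem behavior emerges from predator/prey relations rather than scripted encounters.

```python
# Who preys on whom: this single table drives the whole toy ecosystem.
PREYS_ON = {"raptor": {"human", "herbivore"}, "human": set(), "herbivore": set()}

class Agent:
    def __init__(self, name, species, pos):
        self.name, self.species, self.pos = name, species, pos
        self.alive = True

    def decide(self, world):
        # Sense: everyone else within a (made-up) 1-D perception range.
        nearby = [a for a in world if a is not self and a.alive
                  and abs(a.pos - self.pos) <= 5]
        threats = [a for a in nearby if self.species in PREYS_ON[a.species]]
        prey = [a for a in nearby if a.species in PREYS_ON[self.species]]
        if threats:  # self-preservation instinct comes first
            return ("flee", min(threats, key=lambda a: abs(a.pos - self.pos)))
        if prey:     # then the hunting instinct
            return ("hunt", min(prey, key=lambda a: abs(a.pos - self.pos)))
        return ("wander", None)
```

Drop a raptor near a human and it hunts; the human flees; a distant herbivore just wanders. The interesting situations come from mixing the layers, exactly as the interview describes.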
AI in F.E.A.R. Sequel Leverages Rich Environments
John Mulkey, lead designer of Project Origin, shares details on what fans can expect from the upcoming sci-fi first-person shooter.
“The most obvious difference that will hit the player right away is in the visual density of the world, F.E.A.R. looked really great, but where F.E.A.R. would have a dozen props in a room to convey the space, Project Origin will have five times that much detail.
Of course, this will only serve to further ratchet up that ‘chaos of combat’ to all new levels with more breakables, more debris, more stuff to fly through the air in destructive slow motion beauty.
We are teaching the enemies more about the environment and new ways to leverage it, adding new enemy types with new combat tactics, ramping up the tactical impact of our weapons, introducing more open environments, and giving the player the ability to create cover in the environment the way the enemies do.
Project Origin is scheduled for release on PC, Xbox 360, and PlayStation 3 later this year.
F.E.A.R Sequel Promises Visual Density, Better AI, gamepro.com
Project Origin Discussed on IAonAI, intrinsicalgorithm.com
Roadmap for LuaJIT
Mike Pall announced that LuaJIT 1.1.4 will be released in the next few days, as a minor bug fix upgrade to catch up with Lua 5.1.3. The underlying library, Coco 1.1.4, will be released at the same time, because Lua 5.1.3 makes some incompatible changes to the C call depth counting (LuaJIT basically reverse-patches these parts back to Lua 5.1.2).
Here’s some gossip about LuaJIT 2.x though:
“I’ve been working on LuaJIT 2.x for quite some time now. It took much longer than I had expected. I’m now at the third complete redesign, because the first two approaches I tried didn’t work out: 1. shadow SSA info to augment the old bytecode got too complex. 2. one-pass on-the-fly SSA generation doesn’t work for Lua because of an ugly corner-case involving upvalue semantics.
However I’ve gotten quite far with the third approach, namely a fast interpreter combined with a trace compiler (detailed description at the end of this posting).”
LuaJIT Roadmap 2008, gmane.org
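To make the trace-compiler idea concrete, here’s a toy Python illustration (this is emphatically not LuaJIT’s design, just the principle): interpret bytecode normally, count how often each backward jump is taken, and once a loop is “hot”, record the straight-line trace through its body, which a real system would then compile to machine code.

```python
HOT = 10  # back-edge executions before a loop counts as "hot"

def interpret(code, regs):
    """Tiny bytecode interpreter with hot-loop trace recording."""
    counts = {}   # back-edge target pc -> times taken
    traces = {}   # back-edge target pc -> recorded loop-body trace
    pc = 0
    while pc < len(code):
        op, *args = code[pc]
        if op == "add":
            a, b, dst = args
            regs[dst] = regs[a] + regs[b]
        elif op == "lt":
            a, b, dst = args
            regs[dst] = regs[a] < regs[b]
        elif op == "jt":              # jump if true
            cond, target = args
            if regs[cond]:
                if target < pc:       # backward jump: a loop back-edge
                    counts[target] = counts.get(target, 0) + 1
                    if counts[target] == HOT and target not in traces:
                        # Record the loop body as one straight-line trace;
                        # a real JIT would compile this to native code.
                        traces[target] = code[target:pc]
                pc = target
                continue
        pc += 1
    return regs, traces
```

Running a simple counting loop through this interpreter sums the numbers and, once past the threshold, captures the three-instruction loop body as a trace.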
‘Geek’ Game of Go Gains Popularity
A nice little article about the board game of Go, in particular how it fascinates AI programmers! Go is one of the last human-created intellectual games that a computer cannot come close to matching.
On Game Dialogue
Steve Ersinghaus follows up on an IGDA post that was also mentioned last week:
“Fiction writers or other writers who know what good dialogue is have to [be] involved in the development of these systems. The person who lives inside the voice of a persona needs to be drawn into the game. Here’s a question: can players write good dialogue?”
On Game Dialogue, steveersinghaus.com
Using AI to Convert Images Into Models
This isn’t quite ready for the games industry yet, but Make3D converts a single picture into a 3-D model. It takes a two-dimensional image and creates a three-dimensional “fly around” model, giving viewers access to the scene’s depth and a range of points of view. Here’s how it works:
“[The system] captures a variety of monocular cues and learns the relations between different parts of the image using a machine learning technique called Markov Random Field (MRF). Our algorithm first divides the image into small patches and analyzes them at multiple scales to estimate each of the patches’ 3-d location and 3-d orientation.
We have applied variations of this algorithm for driving cars autonomously, for robotic manipulation, for making 3-d models of large environments, and for creating visually-pleasing 3-d fly-throughs from an image.”
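A much-simplified Python sketch of the pipeline in the quote (not the actual Make3D code): carve the image into patches, make a crude per-patch depth guess from a single monocular cue (here just vertical position, since lower patches tend to be nearer the camera; the real system learns from texture and colour features too), then iteratively smooth neighbouring estimates as a crude stand-in for MRF inference over patch relations.

```python
def estimate_depth(image, patch=4, iters=20, smooth=0.5):
    """Toy patch-based monocular depth estimate for a 2-D grayscale image."""
    h, w = len(image), len(image[0])
    rows, cols = h // patch, w // patch
    # Initial guess from one cue: depth grows with height in the image.
    # A real system would extract texture/colour features from each patch.
    depth = [[(rows - r) / rows for _ in range(cols)] for r in range(rows)]
    for _ in range(iters):
        new = [row[:] for row in depth]
        for r in range(rows):
            for c in range(cols):
                # Pull each patch toward its 4-neighbours' average depth,
                # a crude smoothness prior over adjacent patches.
                nbrs = [depth[rr][cc]
                        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                        if 0 <= rr < rows and 0 <= cc < cols]
                avg = sum(nbrs) / len(nbrs)
                new[r][c] = (1 - smooth) * depth[r][c] + smooth * avg
        depth = new
    return depth
```

Real MRF inference jointly estimates 3-d position and orientation per patch at multiple scales; this sketch only conveys the divide-into-patches-then-regularize structure.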
How long before this kind of technology is used to improve modeling times in game studios?
Stay tuned next week for more smart links from the web!