This is the third and final part of AiGameDev.com’s coverage of the research sessions relating to artificial intelligence at the GDC in Lyon. Be sure to read part 1 (about concurrent behaviors and Bayesian learning) and part 2 (about Embodied Communicational Agents).
This article in particular covers research into machine learning, and how it can be applied to strategy war games like BattleGround. Also, you’ll find out about generating dialog based on the emotions and experience of playing through a Cluedo clone.
Photo 1: The Big Wheel in the center of Lyon after the first day at GDC.
Reinforcement Learning for Modern Strategy Games
Charles Madeira gave his talk about A Reinforcement Learning-Based Approach for Modern Strategy Games. In this research, RL is used as a way to make the AI more challenging and reduce development costs, in particular for the RTS game BattleGround™.
While RL has seen some success in research practice, Charles’ work addresses the common problems that arise when it is applied to games:
Slow learning that requires too many training samples.
Using too much memory to approximate the solution.
Scaling poorly to handle complex problems.
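To see why plain RL scales so poorly, consider a tabular approach in an RTS setting. This is a hypothetical back-of-the-envelope sketch (not from the talk): with multiple units moving on a grid map, the joint state space grows exponentially in the number of units, so a naive Q-table is infeasible.

```python
# Hypothetical illustration: why tabular RL scales poorly in an RTS.
# With n_units on a width x height map, the joint position space is
# (width * height) ** n_units, so a plain Q-table explodes immediately.

def tabular_state_count(width, height, n_units):
    """Number of joint position states for n_units on a width x height grid."""
    return (width * height) ** n_units

# Even a tiny 10x10 map with only 5 units yields 10^10 states --
# before counting health, orders, terrain, or the opponent.
print(tabular_state_count(10, 10, 5))  # 10000000000
```

This is exactly the kind of blow-up that the abstractions below are designed to avoid.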
Screenshot 2: Applying reinforcement learning to the BattleGround strategy game.
Charles presented a framework called STRADA, which reduces the complexity of the problem with multiple tricks:
Grouping units as organized groups decomposed hierarchically.
Abstracting the terrain details using automated analysis.
Bootstrapping the learning process using non-random opponents.
Applying learning separately in the different levels of the hierarchy.
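The hierarchical decomposition above can be sketched in code. This is a minimal illustrative sketch, not the actual STRADA implementation: one simple Q-learner per command level, each working on its own abstracted state and action set, with all names and action lists invented for the example.

```python
import random
from collections import defaultdict

# Illustrative sketch (not the actual STRADA framework): a tabular
# Q-learner that can be instantiated once per level of the hierarchy.
class LevelLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

# Learning is applied separately at each level: high-level commanders
# pick broad objectives while lower levels pick tactics (names invented).
hierarchy = {
    "corps":    LevelLearner(["attack_north", "attack_south", "hold"]),
    "division": LevelLearner(["flank", "assault", "defend"]),
}
```

Because each learner only sees an abstracted state for its own level, the tables stay small, and bootstrapping against non-random scripted opponents gives the updates useful reward signal early on.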
The current system performed much better than the existing scripted AI system and a random-attack opponent, and practically matched human attacking skill. However, Charles’ framework ties all the learned strategies to a specific terrain. This lack of abstraction was discussed at length during the Q&A session; there are multiple possible solutions, but it remains an open problem.
Here are the related papers:

Designing a Reinforcement Learning-based Adaptive AI for Large-Scale Strategy Games. Madeira, C., Corruble, V. and Ramalho, G. Second Conference on Artificial Intelligence and Interactive Digital Entertainment, 2006. Download (PDF)

Generating Adequate Representations for Learning from Interaction in Complex Multiagent Simulations. Madeira, C., Corruble, V. and Ramalho, G. IEEE/WIC/ACM International Joint Conference on Intelligent Agent Technology, 2005. Download (PDF)

Bootstrapping the Learning Process for the Semi-automated Design of a Challenging Game AI. AAAI Workshop on Challenges in Game AI, 2004. Download (PDF)
Dialogs Based on Emotions, Experience and Personality
Viviane Gal from the CEDRIC Lab at CNAM in Paris discussed Dialogs Based on Emotions, Experience and Personality. Her research addresses generating dialog based on the player’s behavior, while taking into account the personality and believability of each actor.
Screenshot 3: An architecture for dialog generation.
The model for the generation of dialogs includes:
- What the player knows already.
- Information that he/she still needs to learn.
This is implemented as a kind of dialog dependency graph, with each node representing a possible state in the story. Unlike traditional dialog trees, however, multiple states can be active at the same time, depending on whether their dependencies are satisfied. Each node can have story points and dialogs attached, to be used at runtime by the actors.
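The dependency-graph idea can be sketched briefly. This is a minimal illustrative sketch under my own assumptions, not Gal’s implementation, and the node names (a nod to the Cluedo-clone setting) are invented: every node whose dependencies are all satisfied is active at once, so several story threads can progress in parallel.

```python
# Minimal sketch of a dialog dependency graph (all names illustrative):
# unlike a dialog tree, every node whose dependencies are satisfied is
# active simultaneously, rather than one branch at a time.

class DialogNode:
    def __init__(self, name, depends_on=(), lines=()):
        self.name = name
        self.depends_on = set(depends_on)  # states that must be completed first
        self.lines = lines                 # dialog attached to this story state

def active_nodes(graph, completed):
    """All nodes not yet completed whose dependencies are all satisfied."""
    return [n for n in graph
            if n.name not in completed and n.depends_on <= completed]

graph = [
    DialogNode("intro", lines=("Welcome to the manor.",)),
    DialogNode("find_key", depends_on={"intro"}),
    DialogNode("meet_butler", depends_on={"intro"}),
    DialogNode("accuse", depends_on={"find_key", "meet_butler"}),
]

# After the intro, two story states are active at the same time.
print([n.name for n in active_nodes(graph, {"intro"})])
# ['find_key', 'meet_butler']
```

The actors would then pick their runtime dialog from whichever active nodes best match the player’s behavior and their own personality.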
Screenshot 4: A dialog graph representing the story.
Here are the related papers:
Dialogs Taking into Account Experience, Emotions and Personality. Anne-Gwenn Bosser, Guillaume Levieux, Karim Sehaba, Axel Buendia, Vincent Corruble, Guillaume de Fondaumière, Viviane Gal, Stéphane Natkin, Nicolas Sabouret. International Conference on Entertainment Computing, 2007. Download (PDF)
Do you have any thoughts about the applicability of this research? Post them below!