It's become a cliché among gameplay & AI programmers to joke about the futility of graphical improvements towards hyper-realism. Ben Sunshine-Hill ranted about rendering individual hairs in beards at GDC 2012 (in this talk at 36:00), and Mike Cook was recently crying into his keyboard at the DX12 announcement featuring a Final Fantasy XV graphics demo. I presume both Ben and Mike were exaggerating to make a point, but a common belief underlies the comical delivery.
To help paint a more complete picture, knowing how seriously such musings can be taken by an audience missing the context, this post outlines some of the major reasons why improvements in graphics are a tremendous thing for AI. Of course, it's a fun challenge to work within limited resources (aren't they always?), but it doesn't stop there...
1. Smart Authoring Tools
As the level of realism goes up, more emphasis falls on the tools, since it would be prohibitive to keep up with the required level of detail just by hiring more artists. Game studios are already switching to procedurally-assisted content creation pipelines to reduce their costs, and as graphics realism increases, so does the quality of the procedural tools.
- Example: Sunset Overdrive (pictured above) uses Houdini as a tool to assist the prototyping and content creation, with procedural algorithms to speed up generation of a large city based on hex grids.
- References: Procedural and Automation Techniques for Design and Production of Sunset Overdrive (PDF)
- Upcoming: The Procedural Content Generation track at the nucl.ai Conference 2015 features case-studies from AAA (Bohemia Interactive) and indie tools (e.g. No Man's Sky) to help you stay on top of things! Grab some tickets or watch the stream on July 21st.
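To make the hex-grid idea concrete, here's a toy sketch of seeded procedural layout in Python. The coordinates, block types and seeding scheme are invented for illustration; they have nothing to do with Insomniac's actual Houdini setup:

```python
import random

def hex_cells(radius):
    """All axial hex coordinates (q, r) within `radius` of the origin."""
    cells = []
    for q in range(-radius, radius + 1):
        for r in range(max(-radius, -q - radius), min(radius, -q + radius) + 1):
            cells.append((q, r))
    return cells

def generate_city_blocks(radius, seed=0):
    """Assign each hex cell a block type deterministically from a seed,
    so artists can iterate repeatedly on the exact same layout."""
    rng = random.Random(seed)
    types = ["road", "park", "lowrise", "highrise"]
    return {cell: rng.choice(types) for cell in hex_cells(radius)}
```

The key property is determinism: the same seed always yields the same city, so a designer can tweak the seed (or hand-override individual cells) rather than placing thousands of blocks manually.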
2. Better Character Animation
As character visuals improve, the Uncanny Valley takes effect and puts more pressure on character animation techniques. Some games avoid the issue and go for stylized animation, but others confront the problem head-on, and we've seen huge improvements in animation technology over the past couple of years.
There are almost too many improvements to list, but one of them is the use of an IK-based system at runtime as the basis of the animation (rather than an optional post-process), which provides a lot of flexibility and control both to animators during authoring and to the engine at runtime.
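As a minimal illustration of the kind of math involved, here's an analytic two-bone IK solver in 2D, the textbook law-of-cosines approach. It's a deliberately simplified toy, not any shipping engine's implementation:

```python
import math

def two_bone_ik(tx, ty, len1, len2):
    """Analytic two-bone IK in 2D: returns (shoulder, elbow) angles in radians
    so the chain's end effector lands on the target, clamping unreachable targets
    to the nearest reachable distance."""
    dist = math.hypot(tx, ty)
    dist = max(abs(len1 - len2), min(len1 + len2, dist), 1e-6)  # clamp to reach
    # Relative elbow bend, from the law of cosines.
    cos_elbow = (dist**2 - len1**2 - len2**2) / (2 * len1 * len2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # The shoulder aims at the target, pulled back by the triangle's inner angle.
    cos_inner = (dist**2 + len1**2 - len2**2) / (2 * len1 * dist)
    shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_inner)))
    return shoulder, elbow
```

With both bones of length 1 and a target at (1, 1), the forward kinematics of the returned angles land exactly on the target; production systems layer many such solvers (and full-body constraints) on top of the base animation.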
- Example: Bungie used a Runtime Rig system at the core of DESTINY's animation system to improve its flexibility and memory efficiency.
- References: Destiny's Runtime Animation Rig (PPTX)
- Upcoming: The Character Animation Technology track at the nucl.ai Conference 2015 features keynotes and masterclasses on using reinforcement learning and motion fields (soon to ship in a AAA game) to break out of your rigid FSM! Grab tickets or watch the (partial) stream on July 22nd.
3. Raw Computation Power!
The availability of high-performance computation power (specifically, GPUs) has revolutionized AI and machine learning, and we're seeing those trends spread to games too. Multiple domains are already benefiting from additional compute and can always use more: for example sensing, animation, decision-making, pathfinding, etc.
Modern techniques such as deep learning with neural networks and Monte Carlo tree search are opening up tremendous opportunities for creating AI that can learn from examples and adapt via search at runtime, assuming some high-level supervision from the AI designer or programmer. This reduces development workloads significantly and improves the results in-game too!
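To give a feel for the search side, here's a tiny sketch of the UCB1 bandit rule that drives MCTS-style selection, applied to one-pile Nim with random playouts. It's flat Monte Carlo rather than a full tree, and purely illustrative; nothing like a shipping campaign AI:

```python
import math, random

def best_nim_move(pile, iters=3000, seed=1):
    """UCB1-guided Monte Carlo for one-pile Nim: take 1-3 stones,
    taking the last stone wins. Returns the most-visited move."""
    rng = random.Random(seed)

    def playout(p):
        # True if the player to move at pile size p wins under random play.
        take = rng.randint(1, min(3, p))
        return True if take == p else not playout(p - take)

    moves = list(range(1, min(3, pile) + 1))
    visits = {m: 0 for m in moves}
    wins = {m: 0 for m in moves}
    for t in range(1, iters + 1):
        # UCB1: exploit high win rates, but keep exploring rarely-tried moves.
        move = max(moves, key=lambda m: float("inf") if visits[m] == 0 else
                   wins[m] / visits[m] + math.sqrt(2 * math.log(t) / visits[m]))
        won = move == pile or not playout(pile - move)
        visits[move] += 1
        wins[move] += int(won)
    return max(moves, key=lambda m: visits[m])
```

The appeal for games is exactly this shape: no hand-authored evaluation, just simulations plus a principled explore/exploit rule, which scales directly with available compute.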
- Example: Creative Assembly's Campaign AI for the TOTAL WAR series includes an implementation of MCTS that cleverly explores possible options to find good strategies for creating and coordinating units.
- References: Monte-Carlo Tree Search in TOTAL WAR: ROME II’s Campaign AI
- Upcoming: The Real-time Decisions track at the nucl.ai Conference 2015 will cover Creative Assembly's implementation in more depth, and applications of MCTS to Fable Legends, and of course deep learning for games! Grab tickets or watch the (partial) stream on July 20th.
4. Surviving Environment Hazards
As worlds become more realistic, there's ever more emphasis on AI techniques that can cope with the complexity. Over the past decade, the games industry has done significant work on creating reliable pathfinding from "polygon soup" and on efficient sensing that takes sufficient graphical detail into account.
A consequence of this is that for every additional piece of detail that goes into the graphics, more resources must go into the representations used by non-player characters, from the navigation mesh to the collision geometry.
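As a toy illustration of the volumetric navigation data that flying agents need, here's a breadth-first search over a 6-connected voxel grid (bounded to 8×8×8 for the sketch). Real solutions are far more compact and hierarchical than this:

```python
from collections import deque

def voxel_path(blocked, start, goal):
    """BFS over a 3D voxel grid; `blocked` is a set of (x, y, z) cells.
    Returns the shortest list of cells from start to goal, or None."""
    if start in blocked or goal in blocked:
        return None
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y, z = cell
        for nxt in ((x+1,y,z), (x-1,y,z), (x,y+1,z),
                    (x,y-1,z), (x,y,z+1), (x,y,z-1)):
            # Stay inside the 8x8x8 sketch world and out of solid voxels.
            if nxt not in blocked and nxt not in came_from \
                    and all(0 <= c < 8 for c in nxt):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None
```

A wall blocking the ground plane forces the path up and over it, which is exactly the behavior a navmesh confined to walkable surfaces cannot express.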
- Example: WARFRAME's airborne enemies must navigate through complex procedurally generated levels. The screenshot above shows a voxel-based pathfinding solution that deals with this complexity efficiently!
- References: Getting off the NavMesh: Navigating in Fully 3D Environments (PDF)
@alexjc More realistic characters need more realistic AI. More complex environments require better motion and pathfinding.— Sean Lindskog (@FiredanceGames) May 1, 2015
5. Rendering Unlocks Opportunities
Recent games on the latest consoles can render significantly more than before, pushing boundaries that were previously out of reach. In particular, the CPUs and GPUs can now render hundreds of characters on screen and simulate thousands, which provides a sense of scale and opens up new gameplay opportunities as well.
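A common trick for simulating thousands while only paying full cost for a few is level-of-detail scheduling: nearby agents think every frame, distant ones on a staggered rota. Here's a minimal sketch (the radius and frame divisor are arbitrary illustrative choices):

```python
def schedule_updates(agents, frame, near_radius=20.0):
    """Return the ids of agents that should think this frame.
    `agents` is a list of (agent_id, distance_to_camera) pairs.
    Near agents update every frame; distant ones every 4th frame,
    staggered by id so the per-frame cost stays even."""
    to_update = []
    for agent_id, dist in agents:
        if dist <= near_radius or (frame + agent_id) % 4 == 0:
            to_update.append(agent_id)
    return to_update
```

Over any window of four frames, every distant agent still gets exactly one update, so the crowd keeps moving; the player just never sees the reduced rate up close.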
- Example: ASSASSIN's CREED: UNITY supports thousands of pedestrians in the streets of Paris, helping reinforce the impression of a revolution of the masses.
- References: Postmortem: Developing Systemic Crowd Events on Assassin's Creed Unity
- Upcoming: The Crowds & Ambient Life track at the nucl.ai Conference 2015 features Ubisoft Montreal too, as well as key people from the VFX industry, e.g. Pixar and MPC! Grab tickets or watch the (partial) stream on July 20th.
As rendering improves, it opens up opportunities for AI and simulation that were previously not possible. This also ties into the next topic of making the world seem less static...
6. Lower CPU Overheads
As graphics drivers improve, especially with recent initiatives like DX12 and Vulkan, the CPU requires fewer cycles to perform the same operations. Modern engines tend to shift more of the work off CPUs as well, for example with geometry shaders or instancing.
This is great news for AI and other game logic that traditionally runs on the CPU, as more of these resources are freed up. Consoles like the Xbox One and PS4 have yet to see their CPU usage maximised, and we'll certainly see more from highly optimized PC games too! Thanks to Neil for pointing this one out on Twitter.
@alexjc most obvious - less CPU overhead of DX12/Vulkan means more cycles for AI et al.— Neil Henning (@sheredom) May 1, 2015
7. Procedural Graphics
Not only do content creation pipelines need to improve to create additional detail, but engines must also adapt to render all this content in realtime. Relying on traditional CPU-to-GPU pipelines could not provide sufficient detail without large overheads.
The solution is to put procedural techniques into the graphics engine itself at runtime, specifically on the GPU, making possible things that previously were not. Some games have been doing this for years, but it's becoming more and more common, and at larger scales.
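The core trick behind GPU scattering is deterministic hashing: each grid cell decides for itself, with no stored data, whether vegetation appears there, so the same grass reappears identically every time you revisit. Here's a CPU sketch of the idea (the hash constants and density are arbitrary, not REDengine's):

```python
def hash01(x, y, seed=0):
    """Cheap deterministic integer hash of a grid cell, mapped to [0, 1)."""
    h = (x * 374761393 + y * 668265263 + seed * 2246822519) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 2**32

def scatter_grass(x0, y0, size, density=0.3, seed=7):
    """Place a grass tuft in each cell whose hash falls under `density`,
    jittered within the cell so the result doesn't look like a grid."""
    tufts = []
    for x in range(x0, x0 + size):
        for y in range(y0, y0 + size):
            if hash01(x, y, seed) < density:
                jx = hash01(x, y, seed + 1)
                jy = hash01(x, y, seed + 2)
                tufts.append((x + jx, y + jy))
    return tufts
```

On a GPU this runs per-instance in a shader with no memory traffic at all, which is why such enormous quantities of vegetation become affordable.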
- Example: The WITCHER 3 relies on GPU-based scattering of vegetation to create its realistic environments partly procedurally at runtime.
- References: Landscape Creation and Rendering in REDengine 3
- Upcoming: The Procedural Content Generation track at the nucl.ai Conference 2015 will cover the tradeoffs between tools and runtime proceduralism — using examples from ARMA/VBS. Grab tickets or watch the (partial) stream on July 21st!
8. Living, Breathing Worlds
For worlds to become more realistic, they need not only high-quality static geometry and textures, but also various forms of ambient life. Flocks of birds, a wide variety of insects, rodents, and other wildlife become a requirement.
That's obviously great news for AI: providing interesting behaviors for the wildlife, but also acting as an AI director that coordinates all these different species into a consistent and fun experience.
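The classic starting point for those flocks of birds is Reynolds' boids. Here's a minimal, unoptimized sketch of the three rules; the weights and neighborhood radius are arbitrary illustrative choices:

```python
def boids_step(positions, velocities, dt=0.1):
    """One update of a tiny boids flock: cohesion, alignment, separation.
    Positions and velocities are lists of (x, y) tuples."""
    n = len(positions)
    new_vel = []
    for i in range(n):
        px, py = positions[i]
        vx, vy = velocities[i]
        cx = sum(p[0] for p in positions) / n - px   # cohesion: steer to centre
        cy = sum(p[1] for p in positions) / n - py
        ax = sum(v[0] for v in velocities) / n - vx  # alignment: match heading
        ay = sum(v[1] for v in velocities) / n - vy
        sx = sy = 0.0                                # separation: avoid crowding
        for j in range(n):
            if i != j and abs(positions[j][0] - px) + abs(positions[j][1] - py) < 1.0:
                sx += px - positions[j][0]
                sy += py - positions[j][1]
        new_vel.append((vx + 0.01 * cx + 0.05 * ax + 0.1 * sx,
                        vy + 0.01 * cy + 0.05 * ay + 0.1 * sy))
    positions = [(p[0] + v[0] * dt, p[1] + v[1] * dt)
                 for p, v in zip(positions, new_vel)]
    return positions, new_vel
```

Three local rules produce convincing flocks with no central choreography, which is exactly what makes ambient life cheap enough to scatter everywhere.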
- Example: FAR CRY 4 (the Game/AI Conf. 2014 keynote) renders not only amazing landscapes in the Himalayas, but fills them with wildlife that makes things interesting.
- References: Vienna Game/AI Conference ’14: Highlights, Photos & Slides
- Upcoming: The Systemic Design & AI Directors track at the nucl.ai Conference 2015 features presentations, case-studies and masterclasses. Grab tickets or watch the (partial) stream on July 22nd!
@alexjc because you have more drawcalls, AI gets more focus because you want to do something interesting with those drawcalls— Sander van Rossen (@logicalerror) May 1, 2015
9. Attracting Interest
Recent studies have shown what the industry has always known: quality graphics motivate buyers to spend money on games, whether they realise it or not! This is a great thing for games because it helps broaden the audience and draw in more people, who will also demand better gameplay (rather vocally).
In particular, indie game NO MAN'S SKY has attracted a lot of attention from critics and gamers alike for its amazing procedural planets. The game combines amazing looks with deep generative systems under the hood; without the looks, it likely would not have drawn as much interest!
- Upcoming: We're very pleased to welcome Hazel McKendrick at the nucl.ai Conference 2015 to discuss the details behind the procedural systems in No Man's Sky. Grab tickets or watch her on stream on July 21st!
10. Cloud Computing
So what if graphics and rendering take up the entire local computation budget? More and more games are moving to persistent online architectures, with the AI simulated on the server rather than the client. This means hardware advances from desktop PCs can be used on servers too, accelerating a variety of computations from voxelization to raytracing, pathfinding, reasoning, and decision making.
- Example: Microsoft's FORZA games (both the Horizon and Motorsport series) feature a Drivatar technology that learns to drive by examples based on players, using a form of machine learning which was moved to the cloud.
- Upcoming: The Real-time Decisions track at the nucl.ai Conference 2015 features a track keynote by Jeffrey Schlimmer about the algorithms and design insights behind Drivatar! Grab tickets or watch the (partial) stream on July 20th.
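At its very simplest, learning to drive by example can be sketched as a nearest-neighbour policy over recorded player states. This is purely illustrative and bears no relation to Drivatar's actual algorithm; the state features here are invented:

```python
def record_demo(demos, speed, curve, steering):
    """Store one (state, action) sample from a player's lap.
    State is (speed, upcoming curvature); action is the steering input."""
    demos.append(((speed, curve), steering))

def drive(demos, speed, curve):
    """Steer like the player did in the nearest recorded state (1-NN imitation)."""
    def dist(state):
        return (state[0] - speed) ** 2 + (state[1] - curve) ** 2
    return min(demos, key=lambda d: dist(d[0]))[1]
```

Even this toy shows why the cloud matters: the demonstration database and the lookup grow with every player lap, which is far easier to scale server-side than on a console sharing its budget with rendering.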
11. Company Investments
In almost all cases, investment in the graphical realism of a game does not preclude research and innovation in other departments; on the contrary, as the many points above show! And it's not only a theoretical argument: it's true in the case of FINAL FANTASY XV as well...
Square Enix is among the very few publishers with an AI R&D department doing forward-looking projects that integrate into its games. This work involves using modern planners (e.g. STRIPS) to coordinate characters and create ambient life in the world. Few companies have explored this topic so far, and it remains highly experimental even in academia.
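For a flavour of what STRIPS-style planning looks like, here's a minimal forward-search planner over sets of facts. It's a sketch of the formalism only, not Square Enix's system; the bird domain in the usage is invented:

```python
from collections import deque

def plan(initial, goal, actions):
    """Tiny forward-search STRIPS-style planner. States are frozensets of
    facts; each action is (name, preconditions, add_list, delete_list).
    Returns the shortest list of action names reaching the goal, or None."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:  # every goal fact holds
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:  # action applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None
```

Given a toy domain where a bird can fly from its nest to a lake and then drink, the planner chains both actions to satisfy a "quenched" goal without anyone scripting that sequence, which is precisely the appeal for ambient life.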
- Upcoming: Hendrik Skubch from Square Enix' research lab in Japan will be presenting at nucl.ai Conference 2015 his work on applying planning technology to Final Fantasy XV. Grab your tickets now or watch the stream on July 21st!
As graphics improve, expectations of the entire game increase: not just the animation, but also the behavior of characters, whether the world feels alive, and the wildlife and crowds within it. On the technical side, increased realism also means better technology, procedural techniques on the GPU, and production pipelines that include smart algorithms. Improvements in drivers and hardware can be exploited by artificial intelligence too.
It's a constant battle for resource allocation between AI and graphics in games, but in absolute terms, and for all the reasons above, we're better off now than we have ever been! I, for one, welcome our new hyper-realistic characters and the graphics technology behind them. Any progress in video games is great news for AI geeks too!
See you at the nucl.ai Conference (onsite or online) in July then? :-)