Back in 2003, before the Electronics Entertainment Expo was neutered, I attended a panel discussion anchored by Peter Molyneux (Fable, Black & White), David Jones (GTA, Lemmings), and Will Wright (SimCity, The Sims, Spore). In general, the topic was where these three saw interactive entertainment headed. Given their respective positions in the world of gaming at the time, it was a well-attended event - and with good reason. The three chatted amiably for some time about a variety of subjects. Peter showed off some demo material of “The Movies”, which was in production at that point. (Interestingly, the user-generated, sharable content in “The Movies” turned out to be something of a forerunner of what Will is doing with Spore.) The banter was light and quick-witted with a delightful blending of their various accents punctuated by chuckles from the audience.
Thankfully, there was sufficient time for a Q&A session and, feeling almost hypnotically drawn to the collective wisdom of the three luminaries on stage, I approached the mic. I took the opportunity to ask a question that I felt was the common thread running through the work of all three men. (I seem to have misplaced my audio CD so I paraphrase myself…)
“All three of you are known for work that has embraced free-roaming, open-ended, sandbox worlds with individual agent-based characters interacting with each other. How does it feel to look down at the worlds you created with fascination and excitement - but knowing full well that the interactions and behaviors you are watching could collapse in on themselves at any moment? And how do you go about testing in order to try to avoid that problem?”
There was a deafening silence in the room at that point. I wish that I could find my CD so I could time how long it was. All I remember, however, is that the rapid-fire chit-chat was gone and, in the interminable pause that ensued, I wondered briefly if the next sound I was to hear was that of jets of roaring flame accompanying a detached, horrifying voice — “Do not arouse the wrath of the great and powerful Oz!” Instead, I heard one of them — I believe it was Will — say quietly, simply, and perhaps with a bit of perturbation: “Good question.”
Photo 1: Peter Molyneux, David Jones and Will Wright at E3 2003.
The others echoed their discomfort at the entire notion. They each went on to discuss how it was, indeed, an unnerving experience to watch the behavior of their synthetic progeny emerge. They spoke of accelerated simulation rates and automated testing - even to the point of coming back in the morning to see if the simulation was still within the bounds of sensibility. However, I found myself not listening entirely. Something was tickling at my thoughts. I knew I had touched upon something sacred. These men were arrayed before us because they were pushing the envelope of simulation and behavioral AI. We all knew that. However, I had just discovered the boundary that they were pushing. How does one create chaos without having it turn into… well… chaos?
A Love/Hate/Fear Relationship
Emergent behavior is a sensitive topic in AI circles. Many people want it, many people fear it, and certainly everyone talks about it. The primary argument against emergent behavior is that it is unpredictable. It is an expression of chaos in a world where designers want predictability and control. Or is it really?
Many of us have heard of “Chaos Theory“. It was even implicated in the movie “Jurassic Park” as the explanation for why the purportedly safe experiment went to heck. However, what most people don’t know or understand is that the concept of Chaos Theory has nothing at all to do with chaos. Whereas chaos is a complete lack of order that defies any rule set or algorithmic explanation, Chaos Theory is entirely deterministic. It is a system where every single interaction is based on a definable action/reaction pair. It is inextricably linked to cause and effect - only at the most minute, if not atomic, scale. If a system that exemplifies Chaos Theory looks random, you simply haven’t looked far enough, deep enough or long enough to determine the precise constellation of factors that are in effect.
Much of what we call emergent behavior in the game industry is based on the same premises as Chaos Theory. If our agent systems are entirely deterministic - that is, they are based on a finite set of distinct “rules of engagement” - then, given the starting parameters and any additional inputs along the way (such as the player’s actions), we can predict exactly what will transpire. Ironically, because we can’t possibly model all the parameters that would be in play in the real world, we must apply random (yes, I know… pseudo-random) noise to our otherwise deterministic models in order to make them look more “real” — more like the entirely deterministic world we are trying to emulate.
Modeling Randomness With Randomness
For example, when determining the actual impact point of a shooter’s shot, we can’t model all the variances in wind speed, direction, temperature, humidity, bullet weight, explosive thrust, the muscle tremors in his arms and hands, and even the blood pulse in his hand on the stock of the gun. And I’m sure I forgot a few. However, we can do a pretty smashing job of determining an exactly straight line from the barrel of a gun to the midpoint of the player’s chest. So, in a nod of deference to the mysteries of mathematical chaos that should be in play, we add some parametric noise that doesn’t even attempt to model the above criteria. More likely we just apply some stripped-down Gaussian distribution and move on.
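That “stripped-down Gaussian” is a one-liner in most languages. A minimal sketch - the `spread` value here is an invented tuning knob, not a ballistic constant:

```python
import random

def impact_point(aim_x, aim_y, spread=2.0):
    """Perturb a perfect aim point with Gaussian noise.

    One normal distribution stands in for wind, tremor, pulse, and
    everything else we can't afford to model individually.
    """
    return (random.gauss(aim_x, spread), random.gauss(aim_y, spread))
```

Over many shots this produces exactly the tight cluster around the aim point that players read as “realistic” marksmanship.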
The same can be said for physics engines. The rock isn’t smooth, nor is its mass perfectly evenly distributed. The surface it is bouncing off of isn’t exactly smooth either, nor does it have a perfectly uniform elasticity. Aw, what the heck — toss in a random number!
“Aw, what the heck — toss in a random number!”
Exactly which muscles, arteries, veins, nerves, and bones got hit by the sword? What was the angle of the limb at impact? What was the force of the impact? That would be affected by the sharpness of the blade, so the force would really be pressure per square inch (or millimeter). But the speed of travel of the blade would have to do with the distance along the blade from the rotation point and the resultant angular velocity. And how sharp was the blade at that particular impact point? How many times had it struck armor or shield right there? Is it duller there than farther toward the hilt? Oh, never mind. Roll a d20+5 and be done with it.
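And the punch line really is the whole model. A sketch of the “roll a d20+5” school of combat resolution - the numbers come straight from the paragraph above, not from any real ruleset:

```python
import random

def sword_damage(rng=random):
    """All of the blade metallurgy, angular velocity, and anatomy above,
    collapsed into a single die roll: d20 + 5, so a result in 6..25."""
    return rng.randint(1, 20) + 5
```

One uniform roll replaces an entire physics and physiology problem, and for most games nobody ever notices the difference.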
So… to sum up: in trying to model a process that isn’t random but deterministic, we make a deterministic model and add randomness. And we embrace that! Rocks bouncing randomly around the ground look cool! Bullets missing the target slightly in a cluster pattern look real! We accept the idea that the addition of nonsensical data - i.e. the random numbers - is a necessary part of our simulation. It is acceptable for us to give up that control if we are going to model scores of bouncing rocks, hundreds of sword impacts and thousands of bullet holes.
Modeling Behavior Without Randomness
But what about behaviors? The agent-based models that are increasingly popular of late are generally based on a defined rule set. We model, through a variety of techniques, what amounts to a specific list of cause/effect pairs. Every input from the environment is mapped to an output — even if that output is the status quo. Emergent behavior, itself, falls out of this notion. The term “emergent” was originally used by the philosopher G. H. Lewes:
“Every resultant is either a sum or a difference of the co-operant forces; their sum, when their directions are the same — their difference, when their directions are contrary. Further, every resultant is clearly traceable in its components, because these are homogeneous and commensurable. It is otherwise with emergents, when, instead of adding measurable motion to measurable motion, or things of one kind to other individuals of their kind, there is a co-operation of things of unlike kinds. The emergent is unlike its components insofar as these are incommensurable, and it cannot be reduced to their sum or their difference.”
That’s a long-winded way of saying “the combinatorial explosion that results from lots of little things working on their own” [Dave Mark circa 2008].
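Conway’s Game of Life - not mentioned on that panel, but the canonical illustration of Lewes’ point - shows the idea in a dozen lines: two local rules per cell, and yet gliders, oscillators, and whole menageries emerge that no rule mentions. A minimal sketch:

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Life.

    `cells` is a set of live (x, y) coordinates. The complete rule set:
    a dead cell with exactly 3 live neighbors is born; a live cell with
    2 or 3 live neighbors survives; everything else is dead.
    """
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in cells)}
```

Feed it a five-cell glider and, four generations later, the same shape reappears one cell down and to the right. Movement is the “emergent” here: neither rule says anything about translation.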
So our agent-based models are really an implementation of Chaos Theory. That is, they are complex systems whose results follow entirely deterministically from relatively simple rules. However, as Jurassic Park so elegantly portrayed for us, even deterministic models can spin wildly out of control. There are plenty of examples of very simple systems whose results can vary widely - almost looking “broken” - simply because of the interaction of those simple rules.
“However, as Jurassic Park so elegantly portrayed for us, even deterministic models can spin wildly out of control.”
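The textbook demonstration of that spin-out - not from the panel, but the standard Chaos Theory example - is the logistic map: one deterministic line of arithmetic, no random numbers anywhere. Two runs that start one-billionth apart track each other at first and then diverge completely:

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the fully deterministic rule x' = r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Early on the two runs are indistinguishable; within a few dozen steps
# they bear no resemblance to each other -- same rule, different worlds.
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)
```

A one-billionth difference in starting conditions is far smaller than anything a designer would ever tune, which is exactly why “it looked fine in the playtest” guarantees nothing.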
And that is the rub. That is the ghost in the machine that gave Peter, David and Will pause. That is the beast that waits below the surface to reach up and wrap its combinatorial tentacles around our placid simulation and drag it down into the abyss of scathing reviews. And we never know if and when it will strike. Perhaps the name “Chaos Theory”, although not an appropriate term for describing the system itself, was an appropriate one after all for describing the potential results of that system.
So, the question is… how do we rein in the possible out-of-control spiral that could result from what seems to be a decidedly non-stochastic system of simple rules? How do we predict that it may happen so as to avoid even having to rein it in? For that matter, are we so sure that we want to open that Pandora’s box in the first place? But if we don’t embrace Chaos Theory and its cousin, emergence, are our games forever stuck in the comfortably safe rut of narrowly defined predictability?