
The Awakening of Conscious Bots: Inside the Mind of the 2K BotPrize 2010 Winner

Raúl Arrabales and Jorge Muñoz on October 21, 2010

This in-depth article about CC-Bot2, the winning entry of this year's 2K Bot Prize at CIG '10, was written and contributed by Raúl Arrabales and Jorge Muñoz, also known as the Conscious-Robots team.

Most current efforts in the development of believable bots — bots that behave like human players — are based on classical AI techniques. These techniques rest on relatively old principles, which are nevertheless being progressively improved or cleverly adapted to meet the requirements of new games. For the design of our bot, CC-Bot2, we took the opposite approach. Specifically, we implemented a computational model of the Global Workspace Theory (Baars, 1988): a kind of shared memory space where different agents — which we call specialized processors — dynamically collaborate and compete with each other (see Figure 1). We believe that applying new techniques from the field of Machine Consciousness can also yield good results, even in the short term.

In this article we briefly describe the design of CC-Bot2, the winning Unreal Tournament bot developed by the Conscious-Robots team for the third edition of the 2K BotPrize. The BotPrize competition is a version of the Turing test adapted to the domain of FPS video games (Hingston, 2009). The ultimate goal of the contest is to develop a computer game bot that behaves the way humans do. A bot would be considered to pass the Turing test (in this particular domain) if it is indistinguishable from human players.

Figure 1: Global Workspace Model.

CERA-CRANIUM Cognitive Architecture and CC-Bot2

As a result of our research on Machine Consciousness we have developed a new cognitive architecture called CERA-CRANIUM (Arrabales et al., 2009), which has been the basis for the development of CC-Bot2 (CERA-CRANIUM Bot 2). CERA-CRANIUM is a cognitive architecture designed to control autonomous agents, such as physical mobile robots or Unreal Tournament bots, based on a computational model of consciousness. Its main inspiration is the Global Workspace Theory (Baars, 1988). CC-Bot2 is a Java implementation of the CERA-CRANIUM architecture developed specifically for the 2K BotPrize competition.

CERA-CRANIUM consists of two main components (see Figure 2):

  • CERA, a control architecture structured in layers, and

  • CRANIUM, a tool for the creation and management of large numbers of parallel processes in shared workspaces.

As we explain below, CERA uses the services provided by CRANIUM with the aim of generating highly dynamic and adaptable perception processes orchestrated by a computational model of consciousness.

Figure 2: Overview of CERA-CRANIUM architecture.

Basically, in terms of controlling a bot, CERA-CRANIUM provides a mechanism to synchronize and orchestrate a number of different specialized processors that run concurrently. These processors can be of many kinds: usually they are detectors for given sensory conditions, like a “player approaching” detector, or behavior generators, like a “run away from that bully” processor.
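To make the idea concrete, here is a minimal Java sketch of what such a specialized processor could look like. This is illustrative only: the interface name, the string-based workspace, and the detector logic are simplifications invented for this article, not the actual CC-Bot2 API.

```java
// Illustrative sketch, not the actual CC-Bot2 API: a specialized processor
// inspects the shared workspace and may publish new content into it.
import java.util.List;

interface SpecializedProcessor {
    // Heuristic estimate of how much this processor can contribute right now.
    double activation(List<String> workspace);

    // Run one step; may publish new percepts or actions into the workspace.
    void process(List<String> workspace);
}

// A toy "player approaching" detector: it fires when the workspace already
// contains evidence that a player is in view.
class PlayerApproachingDetector implements SpecializedProcessor {
    public double activation(List<String> workspace) {
        return workspace.contains("player-in-view") ? 1.0 : 0.0;
    }

    public void process(List<String> workspace) {
        if (activation(workspace) > 0.5) {
            workspace.add("player-approaching");
        }
    }
}
```

In the real architecture, many such processors run concurrently over the same workspace; the activation value is what lets them compete for influence.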


CERA is a layered cognitive architecture designed to implement a flexible control system for autonomous agents. The current definition of CERA is structured in four layers (see Figure 3): the sensory-motor services layer, the physical layer, the mission-specific layer, and the core layer. As in classical robot subsumption architectures, higher layers are assigned more abstract meaning; however, the definition of layers in CERA is not directly associated with specific behaviors. Instead, each layer manages the specialized processors that operate on the sorts of representations handled at that particular level, i.e. the physical layer deals with data representations closely related to raw sensory data, while the mission-specific layer deals with higher-level, task-oriented representations.

The CERA sensory-motor services layer comprises a set of interfacing and communication services that implement the required access to both sensor readings and actuator commands. These services provide the physical layer with a uniform access interface to the agent’s physical (or simulated) machinery. In the case of CC-Bot2, the CERA sensory-motor layer is essentially an adaptation layer for Pogamut 3.

The CERA physical layer encloses the low-level representations of the agent’s sensors and actuators. Additionally, according to the nature of the acquired sensory data, the physical layer performs data preparation and preprocessing. Analogous mechanisms are applied at this level to actuator commands, making sure, for instance, that command parameters stay within safety limits. In most cases, the representation we use for sensory data and commands in the CC-Bot2 physical layer is that of Pogamut 3, like “player appeared in my field of view” or “I am being damaged”.

The CERA mission-specific layer produces and manages elaborated sensory-motor content related to both the agent’s vital behaviors and its particular missions (in the case of a deathmatch game, the mission is relatively clear and simple). At this stage, single contents acquired and preprocessed by the physical layer are combined into more complex pieces of content that carry specific meaning related to the agent’s goals (like “this player is my enemy” or “enemy x is attacking me”). The mission-specific layer can be modified independently of the other CERA layers according to the assigned tasks and the agent’s need for functional integrity.

The CERA core layer, the highest control level in CERA, encloses a set of modules that perform higher cognitive functions. The definition of, and interaction between, these modules can be adjusted to implement a particular cognitive model. In the case of CC-Bot2, the core layer contains the code for the attention mechanism (many other modules could be added in the future). The main objective of these core modules is to regulate the way the lower CERA layers work, i.e. the way specialized processors run and interact with each other.

The physical and mission-specific layers are inspired by cognitive theories of consciousness in which large sets of parallel processes compete and collaborate in a shared workspace in search of a global solution. A CERA-controlled agent is endowed with two hierarchically arranged workspaces that operate in coordination to find two global, interconnected solutions: one related to perception and the other related to action. In short, CERA has to continuously answer the following questions:

  • What should be the next content of the agent’s conscious perception?

  • What should be the next action to execute?

Typical agent control architectures focus on the second question while neglecting the first. Here we argue that a proper mechanism to answer the first question is required in order to successfully answer the second in a human-like fashion. In any case, both questions have to be answered taking into account safe operation criteria and the mission assigned to the agent. Consequently, CERA is expected to find optimal answers that will eventually lead to human-like behavior. As explained below, CRANIUM is used to implement the workspaces that fulfill the needs established by the CERA architecture.


CRANIUM provides a subsystem in which CERA can execute many asynchronous but coordinated concurrent processes. In the CC-Bot2 implementation (Java), CRANIUM is based on a task dispatcher that dynamically creates a new execution thread for each active processor. A CRANIUM workspace can be seen as a particular implementation of a pandemonium, where daemons compete with each other for activation. Each of these daemons or specialized processors is designed to perform a specific function on certain types of data. At any given time the level of activation of a particular processor is calculated based on a heuristic estimation of how much it can contribute to the global solution currently sought in the workspace. The concrete parameters used for this estimation are established by the CERA core layer. As a general rule, CRANIUM workspace operation is constantly modulated by commands sent from the CERA core layer.
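The dispatcher idea can be sketched as follows. The class, the flat activation list, and the single threshold are inventions for illustration; the real CRANIUM uses richer heuristic estimations and continuous modulation commands from the CERA core layer.

```java
// Hedged sketch (names invented) of a pandemonium-style dispatcher: on each
// cycle it launches a task for every daemon whose activation clears a
// threshold — a parameter that, in CERA-CRANIUM, the core layer would set.
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class PandemoniumDispatcher {
    private final double threshold;  // stands in for core-layer modulation
    private final ExecutorService pool = Executors.newCachedThreadPool();

    PandemoniumDispatcher(double threshold) {
        this.threshold = threshold;
    }

    // One workspace cycle: run every sufficiently active daemon concurrently.
    // Returns how many daemons were launched.
    int cycle(List<Runnable> daemons, List<Double> activations) {
        int launched = 0;
        for (int i = 0; i < daemons.size(); i++) {
            if (activations.get(i) >= threshold) {
                pool.submit(daemons.get(i));  // one thread/task per active daemon
                launched++;
            }
        }
        pool.shutdown();
        try {
            pool.awaitTermination(2, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return launched;
    }
}
```

A daemon whose activation falls below the threshold simply does not run that cycle, which is the essence of the competition for activation described above.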

In CC-Bot2 we use two separate but connected CRANIUM workspaces integrated within the CERA architecture. The lower-level workspace is located in the CERA physical layer, where specialized processors are fed with data coming from CERA sensor services (Pogamut). The second workspace, located in the CERA mission-specific layer, is populated with higher-level specialized processors that take as input either the information coming from the physical layer or information produced in the workspace itself (see Figure 4). The perceptual information flow is organized in packages called single percepts, complex percepts, and mission percepts.

Figure 4. CERA-CRANIUM bottom-up flow: perception.
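As a rough illustration of this bottom-up composition step, the following sketch combines two single percepts into an “enemy attacking” complex percept. The classes and labels are assumed, simplified shapes, not CC-Bot2’s real percept classes.

```java
// Assumed, simplified percept shapes: single percepts from the physical
// layer are combined into a complex percept for mission-level processors.
import java.util.List;

class SinglePercept {
    final String label;
    SinglePercept(String label) { this.label = label; }
}

class ComplexPercept {
    final String label;
    final List<SinglePercept> parts;
    ComplexPercept(String label, List<SinglePercept> parts) {
        this.label = label;
        this.parts = parts;
    }
}

// Toy composer: "being damaged" plus "player in view" suggests an attack.
class EnemyAttackComposer {
    static ComplexPercept compose(List<SinglePercept> singles) {
        boolean damaged = singles.stream()
            .anyMatch(s -> s.label.equals("being-damaged"));
        boolean enemySeen = singles.stream()
            .anyMatch(s -> s.label.equals("player-in-view"));
        String label = (damaged && enemySeen) ? "enemy-attacking" : "unclassified";
        return new ComplexPercept(label, singles);
    }
}
```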

In addition to the bottom-up flow involving perception processes, a top-down flow takes place simultaneously in the same workspaces in order to generate the bot’s actions. The physical-layer and mission-specific-layer workspaces include single actions (directly translated into Pogamut commands), simple behaviors, and mission behaviors (see Figure 5).

Figure 5. CERA-CRANIUM top-down flow: behavior generation.

One of the key differences between the CERA-CRANIUM bottom-up and top-down flows is that while percepts are iteratively composed to obtain more complex and meaningful representations, high-level behaviors are iteratively decomposed until a sequence of atomic actions is obtained. The top-down flow could be considered, to some extent, equivalent to behavior trees, in the sense that behaviors are associated with given contexts or scopes. However, the way CERA-CRANIUM selects the next action is quite different, as the current active context is periodically updated by the CERA core layer. At the same time, the active context is calculated based on input from the sensory bottom-up flow. Having an active context mechanism implies that, out of the set of actions that could potentially be executed, only the one located closest to the active context will be selected for execution. In the next section, we describe how the behavior of the agent is generated using this approach.
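The iterative decomposition of behaviors can be sketched as a simple recursive expansion. The behavior names and the static expansion table below are hypothetical; in CC-Bot2 the decomposition is performed by the specialized processors themselves.

```java
// Illustrative sketch with made-up behavior names: a high-level behavior is
// expanded into sub-behaviors until only atomic actions remain (in CC-Bot2,
// atomic actions are what gets translated into Pogamut commands).
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class BehaviorDecomposer {
    // Hypothetical expansion table: behavior -> simpler behaviors.
    static final Map<String, List<String>> EXPANSIONS = Map.of(
        "chase-player", List.of("move-looking", "select-best-weapon"),
        "move-looking", List.of("move-to-point", "gaze-at-target"));

    static List<String> decompose(String behavior) {
        List<String> sub = EXPANSIONS.get(behavior);
        if (sub == null) {
            return List.of(behavior);  // no expansion: already an atomic action
        }
        List<String> atomic = new ArrayList<>();
        for (String b : sub) {
            atomic.addAll(decompose(b));  // recurse until everything is atomic
        }
        return atomic;
    }
}
```

For instance, decomposing a hypothetical “chase-player” behavior with this table yields the atomic sequence move-to-point, gaze-at-target, select-best-weapon.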

Behavior Generation

Having a shared workspace, where sensory and motor flows converge, facilitates the implementation of the multiple feedback loops required for adapted and effective behavior. The winning simple behavior is continuously confronted with new options generated in the physical layer, thus providing a mechanism for interrupting behaviors in progress as soon as they are no longer considered the best option. In general terms, the activation or inhibition of perception and behavior generation processes is modulated by CERA according to the implemented cognitive model of consciousness. In other words, behaviors are assigned an activation level according to their distance to the active context in terms of the available sensorimotor space. Only the most active action is executed at the end of each “cognitive cycle.”

Distance to a given context is calculated based on sensory criteria like relative location and time. For instance, if we have two actions, Action A: “shoot to the left” and Action B: “shoot to the right”, and an active context pointing to the left side of the bot (because there is an enemy there), action A will most likely be selected for execution, and action B will be either discarded or kept in the execution queue (as long as it is not too old).
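The shoot-left/shoot-right example can be sketched as follows. The representation is deliberately simplified: distance to the context is reduced here to an angular difference in radians plus an age cutoff, whereas the real mechanism combines several sensory criteria.

```java
// Hypothetical sketch of context-distance action selection: candidates carry
// a direction (radians, positive = left of the bot) and an age; stale ones
// are dropped and the candidate closest to the active context wins.
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

class Candidate {
    final String name;
    final double direction;  // radians; 0 = straight ahead
    final long ageMs;        // time since the action was proposed
    Candidate(String name, double direction, long ageMs) {
        this.name = name;
        this.direction = direction;
        this.ageMs = ageMs;
    }
}

class ContextSelector {
    static String select(List<Candidate> actions, double contextDir, long maxAgeMs) {
        Optional<Candidate> best = actions.stream()
            .filter(a -> a.ageMs <= maxAgeMs)  // discard actions that are too old
            .min(Comparator.comparingDouble(
                a -> Math.abs(a.direction - contextDir)));  // closest to context
        return best.map(a -> a.name).orElse("idle");
    }
}
```

With an active context pointing left (contextDir = +π/2), a “shoot to the left” candidate beats a “shoot to the right” one, matching the example above.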

Figure 6 shows a schematic representation of typical feedback loops produced in the CERA architecture. These loops are closed when the consequences of actions are perceived by the bot, triggering adaptive responses at different levels.

Figure 6. Different feedback loops produced in the CERA-CRANIUM.

Curve (a) in Figure 6 represents the feedback loop produced when an instinctive reflex is triggered. Curve (b) corresponds to a situation in which a mission-specific behavior is being performed unconsciously. Finally, curve (c) symbolizes the higher-level control loop, in which a task is being performed consciously. These three types of control loops are not mutually exclusive; in fact, the same percepts will typically contribute to simultaneous loops taking place at different levels.

CRANIUM workspaces are not passive short-term memory mechanisms. Instead, their operation is affected by a number of workspace parameters that influence the way the pandemonium works. These parameters are set by commands sent to physical and mission-specific layers from the CERA core layer. In other words, while CRANIUM provides the mechanism for specialized functions to be combined and thus generate meaningful representations, CERA establishes a hierarchical structure and modulates the competition and collaboration processes according to the model of consciousness specified in the core layer. This mechanism closes the feedback loop between the core layer and the rest of the architecture: core layer input (perception) is shaped by its own output (workspace modulation), which in turn determines what is perceived.

The CC-Bot2 Implementation

The following table briefly describes some of the main specialized processors implemented in CC-Bot2 (note that several processors performing the very same task, but using different techniques, may coexist in the same workspace).

Specialized Processor | Layer | Task
AttackDetector | Physical | To detect conditions compatible with enemy attacks (health level decreasing, enemy fire, etc.).
AvoidObstacle | Physical | To generate a simple obstacle-avoidance behavior.
BackupReflex | Physical | To generate a simple backup movement in response to an unexpected collision.
ChasePlayer | Mission | To generate a complex player-chasing behavior.
EnemyDetector | Physical | To detect the presence of an enemy based on given conditions, like the previous detection of an attack or the presence of other players using their weapons.
GazeGenerator | Physical | To generate a simple gaze movement directed towards the focus of attention.
JumpObstacle | Physical | To generate a simple jump movement in order to avoid an obstacle.
KeepEnemiesFar | Mission | To generate a complex run-away movement in order to maximize the distance to detected enemies.
LocationReached | Physical | To detect whether the bot has reached the spatial position marked as the goal location.
MoveLooking | Physical | To generate a complex movement combining gaze and locomotion.
MoveToPoint | Physical | To generate a simple movement towards a given location.
ObstacleDetector | Physical | To detect the presence of an obstacle (which might prevent the bot from following its path).
RandomNavigation | Physical | To generate a complex random wandering movement.
RunAwayFromPlayers | Mission | To generate a complex movement to run away from certain players.
SelectBestWeapon | Mission | To select the best weapon currently available.
SelectEnemyToShoot | Mission | To decide which enemy is the best one to attack.

In our current implementation, specialized processors are created programmatically (see the sample code below) and assigned dynamically to their corresponding CERA layer. It is our intention to create a more elegant mechanism for the programmer to define the processor layout (a configuration text file or even a GUI).

// ** ATTACK DETECTOR ** Generates a BeingDamaged percept
// every time the health level decreases
_CeraPhysical.RegisterProcessor(new AttackDetector());

// ** OBSTACLE DETECTOR ** Generates an Obstacle single percept
// if there is any obstacle in the direction of the movement
_CeraPhysical.RegisterProcessor(new ObstacleDetector());

// ** ENEMY DETECTOR ** Generates an Enemy Attacking
// complex percept every time the bot is being damaged
// and possible culprit/s are detected.
_CeraMission.RegisterProcessor(new EnemyDetector());

Conscious-Robots Bot in action

The following is an excerpt of a typical flow of percepts that ultimately generates the bot’s behavior (see Figure 7):

  1. The processor EnemyDetector detects a new enemy, and creates a new “enemy detected” percept.

  2. The “enemy detected” percept is in turn received by the SelectEnemyToShoot processor, which is in charge of selecting the enemy to shoot. When an enemy is selected, the corresponding fire action is generated.

  3. Two processors receive the fire action: one in charge of aiming at the enemy and shooting, and another that creates new movement actions to avoid enemy fire.

  4. As the new movement actions have higher priority than actions triggered by other processors, like the RandomMove processor, they are more likely to be executed.

Figure 7. Simplified scheme of percept and action flow in CERA-CRANIUM.

This is a very simple example of how the bot works. In practice, it is usual to have much more complex scenarios in which several enemies are attacking the bot simultaneously, and the selected target might be any of them. In these cases, the attention mechanism plays a key role. CERA-CRANIUM implements an attention mechanism based on active contexts: percepts that are closer to the currently active context are more likely to be selected and further processed. This helps maintain more coherent sequences of actions.

Future Work

CC-Bot2 is actually a partial implementation of the CERA-CRANIUM model. Our Machine Consciousness model includes much more cognitive functionality that remains unimplemented so far. We aim to enhance the current implementation with new features like a model of emotions, episodic memory, different types of learning mechanisms, and even a model of the self. After much hard work, we expect CC-Bot3 to be a much more human-like bot. We also plan to use the same design for other games like TORCS or Mario.

Although CC-Bot2 could not completely pass the Turing test, it achieved the highest humanness rating (31.8%). As of today, Turing-test-level intelligence has never been achieved by a machine, and there is still a long way to go before we can build artificial agents clever enough to parallel human behavior. Nevertheless, we believe we are pursuing a very promising line of research towards this ambitious goal.


We wish to thank Alex J. Champandard for his helpful suggestions and comments on the drafts of this article.


References

Arrabales, R., Ledezma, A. and Sanchis, A. "CERA-CRANIUM: A Test Bed for Machine Consciousness Research." International Workshop on Machine Consciousness 2009, Hong Kong, June 2009.

Baars, B.J. A Cognitive Theory of Consciousness. Cambridge University Press, 1988.

Arrabales, R., Ledezma, A. and Sanchis, A. "Towards Conscious-like Behavior in Computer Game Characters." Proceedings of the IEEE Symposium on Computational Intelligence and Games 2009.

Hingston, P. "A Turing Test for Computer Game Bots." IEEE Transactions on Computational Intelligence and AI in Games, Vol. 1, No. 3, 2009.

Muñoz, J., Arrabales, R. et al. "2K BotPrize 2010 Winner Bot: Steps Toward Passing the Turing Test."

Discussion 2 Comments

michalb on December 10th, 2010

Nice article and interesting architecture. When reading the paper "CERA-CRANIUM: A Test Bed for Machine Consciousness Research" I was surprised they did not mention ACT-R or jACT-R. ACT-R seems quite related to CRANIUM in terms of goals and architecture; see the paper "Deconstructing and reconstructing ACT-R: Exploring the architectural space". It could be interesting to evaluate these architectures against each other. Both of them actually have some ready-made coupling with Pogamut, which could speed things up. An advantage of ACT-R is that it is really well documented and there are several implementations (Lisp, Python, Java).

Also, after reading this article I got thinking about what it actually means to implement a human-like deathmatch character. When you get down to it, you are usually implementing chunks of behavior that should take effect in certain conditions (e.g. I am being attacked, so I'll select combat behavior). Now, in order to achieve human-likeness you need two things: you need these chunks of behavior to be "human-like", and you need "human-like" decision making between these behavior chunks. Even with a simple FSM you can get good performance if each state is consistent and the behavior fine-tuned (e.g. good pathfinding is a must). This is more about parameters and testing than high-level AI science. But, unfortunately, these low-level problems can't be skipped; you need to take care of them at some point (or someone else has to take care of them for you). Once the low levels are ready you can get into more interesting things: high-level decision making.

And I think it is really great that there are projects like CC-Bot2 that test these advanced cognitive approaches in purposeful and complete implementations. From my point of view, this is where we should dig in right now: get these long-developed cognitive architectures (SOAR, CRANIUM, ACT-R, LIDA, etc.) to control our virtual characters and see what happens. :-) Of course we still need the low levels working for this to work out. Let's hope that Pogamut is at least a little bit of help to you guys interested in this. Ok, enough of my babble. :-) Best, Michal

PS: We would be really grateful if anyone who uses or refers to Pogamut would cite it with this citation: Gemrot, J., Kadlec, R., Bida, M., Burkert, O., Pibil, R., Havlicek, J., Zemcak, L., Simlovic, J., Vansa, R., Stolba, M., Plch, T., Brom, C.: Pogamut 3 Can Assist Developers in Building AI (Not Only) for Their Videogame Agents. In: Agents for Games and Simulations, LNCS 5920, Springer, 2009, pp. 1-15. We are academia-funded and every citation is a big help. Thanks!

jorgemf on December 13th, 2010

Hello Michal, I appreciate your comments and the links to the ACT-R architecture (I guess Raúl cites it in other articles, because he knows it), and I completely agree with the idea of testing cognitive architectures in bots. There will be another BotPrize next year and a related competition, and I encourage you and anyone else interested to create a bot for the competitions. We are going to improve our bot and participate again. Last year we started from scratch and it took us a couple of weeks to develop the architecture in Java and the bot. So I hope to see more competitors next year :-)

Besides the competitions, there is also another way to compare cognitive architectures (or any other type): to measure their level of consciousness you can use the ConsScale. It would be great if we could compare the architectures in a game and also with the scale. Best, Jorge

PS: We will cite Pogamut, it is a great tool to develop bots for UT2004.
