The Octagon Theory (TOT) is a two-player board game in which the goal is to push the opponent's pieces off the edges of an octagonal field or into the hole in the middle. For a game with few rules it is surprisingly complex, and it offers a strategic depth that makes it challenging to master for an AI opponent.
With the advent of mobile gaming and the popularity of board games on this platform, the need for efficient AI for such games has increased in recent years. Board games are an interesting field because of their discrete nature and the often immense branching factor of their search trees, which makes them hard to solve by brute force. Learn more about the state of the art in this field in our Monte Carlo Tree Search article.
We sat down with Daniel Huffman, the inventor and programmer behind the game, to talk about his approach to the computer opponent, which is able to challenge even experienced players.
Q: Can you describe the gameplay of The Octagon Theory and the goals you had in its design?
Daniel Huffman (DH):
The Octagon Theory (TOT) was very loosely inspired by the Japanese sport of Sumo where the goal is to knock one's opponent to the ground or push the opponent out of the ring. So TOT is a two-player 'pushing' game.
You and your opponent basically take turns rotating and placing directional pieces of various strength on the board attempting to knock the opponent's pieces off the board. When the pre-decided number of game turns counts down to zero, the player with the most pieces on the board is the winner.
TOT gameplay: the red player places a piece and pushes one of the blue opponent's pieces off the board.
Along with the Sumo inspiration it was very important to me for TOT to be a very pure and simple game that didn't need a rule book. A game that could be learned by just watching it being played once or twice. But of course I also wanted it to be deep.
I think most of the classic board games like chess, checkers, backgammon, go, etc. all fit within that simple but deep description. And I think that's the reason that they all have lasted down through the ages. And it is more important to me for TOT to be considered as one of the classics and to be played forever than it is for TOT to become an instant sales success. I want TOT to be my legacy.
Q: Which strategies and ways to play can one apply in this game?
DH: If you play in a purely offensive way you'll probably lose. I consider a defensive/offensive mix the best. Always start by securing the strongest positions on the board. Of course those are the positions that are furthest from the board edges and the center hole. And instead of always trying to push opponent pieces towards the edges or off the board in one move, try to slowly set up the opponent's pieces by forcing them into a pattern which will allow later use of one of your stronger pieces to 'kill' three or more opponent pieces with one move. And timing is important too... don't wait until too late in a game to use the stronger pieces, for two reasons:
- You may run out of turns before you can make good use of all your stronger pieces.
- The board may become too crowded to make effective use of your stronger pieces.
And don't get the impression that placing a piece near the edge is always a bad move. If the pattern of pieces on the board is right, with 'cannoning' you can place your piece in just the right edge position to 'kill' opponent pieces that are located on far edges of the board.
Of course blocking tactics are useful at the right time and with the right pattern.
Also I have to mention that I keep learning new strategies the more I play TOT. The funny thing is I had no idea how deep the strategy could be when I designed the game. It seems to have taken on a life of its own. It's probably because of the addition of the one, two, and four-pieces (directional). The original Apple II version of TOT that I made back in the mid-1980s had only one piece... the non-rotatable 8-piece. And an unlimited number of them. So in the new version of TOT I think strategy depth has increased exponentially.
Since the game is so new I think most players have yet to develop any advanced or deep strategies. But what I have seen is very aggressive gameplay - trying to always push the opponent's pieces off the board, without giving any thought to securing safe positions if it means not being able to attack the opponent. Their timing is also off - they may wait too long to use the stronger pieces - the pieces that can push two to eight opponent pieces in one move. And they forget to use piece rotation and 'cannoning' effectively.
Also when playing online I notice that some players will mimic my moves because they think since I designed the game that I must be making the best moves. In this case I'll sometimes make illogical moves just to confuse my opponent and make him/her get hung up on trying to figure out why I made that move.
Developing the AI
Q: How did you approach the development of the AI? Which resources did you use while you were researching your AI, and where did you get your inspiration from?
DH: Back in the late 80s when I developed the AI for TOT there was no Internet and I didn't know much about AI (even now I don't). So my inspiration came out of the early Apple II games and what game logic I could find in the few English-language personal computer books and magazines I could get my hands on here in Tokyo. The actual ideas I had about AI came out of simple programming and what one could do with it - mostly comparison, and searching for the highest value in an array.
My actual approach went like this: If I were the computer, where is the best place I'd place a piece? It was just common sense to place a piece where it could kill the highest number of opponent pieces and/or move the opponent pieces into weaker positions - weaker positions being those that are closer to the board's edge or to the center 'black hole'. Also TOT had only one piece - the 8-piece that attacks opponent pieces in all eight directions. And this 8-piece was unlimited in number and required no rotation computation. So the AI was a lot less complicated than the current AI (more about this below).
Q: You might have tried a brute force solution. This naturally leads to the question of the size of the search space. How big is it, and is it possible to brute-force it in a minimax approach? Did you think about pruning the search space?
DH: I guess I do use a brute force approach. The AI scans every open position on the board and evaluates how much damage it can do from that position, keeps a record of it, moves to the next open position, and does the same evaluation. If the AI can do more damage from this new position than it could have done from the previous position, this new position is marked as the potential position to move to. This is repeated until all open positions have been evaluated. Currently I only do a single breadth search. I don't do any 'what if' scenarios, or do any depth comparisons of potential moves to see what consequences a move will have a few turns down the road. The AI is still very basic. But it is a bit surprising how effective it is. I still wonder how effective it'd be against a very serious player of perfect information board games. I have never gotten to the point of pruning the search space because my AI is still too simple to require it.
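The greedy one-ply scan described above can be sketched in a few lines. This is an illustrative Python sketch, not the game's actual code; `board`, `open_positions`, and `evaluate` are hypothetical names standing in for the real data structures.

```python
import random

def choose_move(board, open_positions, evaluate):
    """Greedy one-ply search: score every open position with an
    evaluation function and keep the best, as the interview describes.
    `board`, `open_positions`, and `evaluate` are hypothetical names."""
    best_score = None
    best_moves = []
    for pos in open_positions:
        score = evaluate(board, pos)  # damage the AI could do from `pos`
        if best_score is None or score > best_score:
            best_score, best_moves = score, [pos]
        elif score == best_score:
            best_moves.append(pos)    # ties are resolved randomly below
    return random.choice(best_moves) if best_moves else None
```

There is no lookahead here: each candidate is scored purely on the damage it can do from the current board state.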
Q: Do you have an evaluation function/weighting scheme and which aspects factor in?
DH: Lots of evaluation and weighting. For example: aggressive vs. defensive play. The AI can be adjusted to get more aggressive (or defensive) at any point in the game, but right now this is a one-time-only switch. Each position on the board is weighted; safer positions are given higher values. This value is added to the value computed for each open position, depending on how many 'kills' (pushing opponent pieces off the board) and/or how many 'pushes' (causing opponent pieces to move to a less safe position) will occur if a piece is placed in the candidate position.
Description of evaluation function/weighting scheme
1. Scan all positions from top-left to bottom-right. If the current board position isn't open, move to the next position until an open position is found.
2. Get the board weight for this position, then scan in each of the eight compass directions (N, NE, E, SE, S, SW, W, NW) as described in steps 3 and 4.
3. In the north compass direction, look for an opponent piece. If an opponent piece is found and it can be 'killed' (knocked off the board), add the 'Normal Kill' value (e.g. 34) to the board weight. If the piece can't be killed but can be 'pushed' to a position with a lower relative (to its current) weighting, add +5; if the piece would be pushed to a position with a higher relative weighting, add -5. The total of these additions becomes the temporary kill value for this direction.
4. Repeat for the rest of the compass directions. Add all the temporary kill values to get the total kill value for this position, and store the value and the position location for later comparison.
5. If this total kill value is higher than the previous best kill value, replace the previous total kill value and board position with the new ones. If it is equal to the previous best, add it to a list of equal kill values; the move will later be selected randomly from this list.
6. Go back to step 1 and repeat for all open board positions. When all positions have been evaluated, the one with the highest total kill value is selected (or selected randomly from the equal-kill-value list if it exists), and its board position is passed to the client, which handles the killing/pushing of the opponent pieces. The client then passes the new board state to the AI.
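The steps above can be condensed into a short sketch. This is a hypothetical Python rendering, not the shipped code; the helper `probe(pos, d)`, which reports what would happen to the nearest opponent piece in direction `d`, and the constant names are invented for illustration. The kill value of 34 and the ±5 push adjustments come from the description above.

```python
import random

DIRECTIONS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
NORMAL_KILL = 34  # example 'Normal Kill' value quoted in the interview

def position_value(pos, board_weight, probe):
    """Total kill value for one candidate position.

    `probe(pos, d)` is a hypothetical helper returning the outcome for
    the nearest opponent piece in direction d: 'kill', 'push_weaker'
    (pushed to a lower-weighted, less safe square), 'push_safer', or
    None if nothing happens in that direction."""
    value = board_weight(pos)  # safer squares start with a higher weight
    for d in DIRECTIONS:
        outcome = probe(pos, d)
        if outcome == "kill":
            value += NORMAL_KILL
        elif outcome == "push_weaker":
            value += 5
        elif outcome == "push_safer":
            value -= 5
    return value

def best_position(open_positions, board_weight, probe):
    """Pick the highest-valued open position, breaking ties at random."""
    scored = [(position_value(p, board_weight, probe), p)
              for p in open_positions]
    top = max(score for score, _ in scored)
    return random.choice([p for score, p in scored if score == top])
```

Note how the board weight and the directional kill/push bonuses simply sum, so a safe square with no targets can still beat a dangerous square with one weak push.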
Q: Which heuristics did you use in your AI, and which heuristics did you find while you were playing the game yourself?
DH: No heuristics or learning algorithms are used. One thing I learned while playing, that the AI doesn't consider, is strategically blocking opponents' moves. Until recently I only considered moves that could do damage, and if no damage could be done, to place a piece in the safest possible position - far from the edges of the board or the center hole. I soon discovered a third way of playing - place pieces in THE positions from which the opponent could do a lot of damage. (This is something I want to put in an updated AI.)
Q: Do you have different difficulty settings for your AI - if so how did you achieve the different behaviors?
DH: I have difficulty levels. But instead of choosing a difficulty, you choose an AI player that is rated as aggressive or defensive, with a strength rating on a 1 to 10 scale. My strongest is only rated at 4, as I expect I can make the AI a lot stronger using 'real' AI techniques. And I expect other people, like the people who visit AiGameDev.com, can do a lot better than I can. The weakest AI players use an earlier version of the AI that in its implementation is just not as good as the newer version. On top of that I use weightings to adjust whether the AI will choose a safe position that does little damage over a position that does lots of damage. A random factor is also used to decide which it will choose. Adjusting the weighting the AI gives to kills also makes it weaker/stronger. How early and when to use the successively stronger pieces is also taken into consideration.
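One way to read the weighting-plus-randomness approach described above is as a blended score with a noise term. This is a minimal sketch under that assumption; the function and parameter names are invented and do not appear in TOT's code.

```python
import random

def adjusted_score(kill_value, safety_value, kill_weight, noise):
    """Hypothetical difficulty knob: blend damage against safety and
    add randomness. A lower kill_weight makes the AI favor safe,
    low-damage moves; a higher noise makes it play less consistently.
    All names here are invented for illustration."""
    return (kill_weight * kill_value
            + (1.0 - kill_weight) * safety_value
            + random.uniform(-noise, noise))
```

With `kill_weight=1.0` and `noise=0` this reduces to pure damage-seeking; dialing either parameter produces the weaker, more defensive-looking 'personalities' the interview mentions.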
Q: How many different versions of your AI did you develop? What did not work that you thought would work - were there any surprises?
DH: Two major, and many minor versions. The 1st major version was the original AI for the Apple II+ TOT back in the late 80s. It only had to consider one piece to use (the 8-piece) to attack the opponent's pieces. This version was done completely in Applesoft BASIC. And there were two surprises:
- It beat me the 1st time (and every time) I played. I could never beat it which I thought was cool, but weird since I was effectively playing against my own thought patterns.
- The computer would take about two minutes to make a move. Way too slooooow.
This led to the 2nd minor version: I kept the game in Applesoft BASIC but I redid the complete AI part in MOS 6502 assembly language. I had to order a book from the USA and teach myself assembly language.
There was one surprise:
It was FAST! Way too fast! As soon as the human player (me) would make a move - BAM! - the AI would make its move. I'd make my next move, then BAM! the computer would make its next move. This was also cool, but it didn't make for good gameplay - no suspense over what the computer would do. So I simply introduced a loop that would wait a random number of seconds before the computer made its move.
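That pacing trick is simple to reproduce. A minimal sketch, assuming invented function and parameter names (the original was a busy-wait loop in 6502 assembly, not a sleep call):

```python
import random
import time

def delayed_ai_move(make_move, min_s=1.0, max_s=4.0):
    """Pause for a random number of seconds before the AI replies, so
    the move doesn't feel instant. `make_move` is a hypothetical
    callback that computes and returns the AI's move."""
    time.sleep(random.uniform(min_s, max_s))
    return make_move()
```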
The 2nd major version is the current one in which the AI has to consider which piece to use, not only the 8-piece from the original TOT, but also the new 4-piece, 2-piece, and 1-piece. And the rotation of these new pieces. And in addition the AI has to consider that only the 1-piece is unlimited in number.
There are a few minor versions in the game that just make use of weighting to change the 'personalities' and strength of each AI player. And there are a few unreleased minor versions that dealt mostly with deciding which piece to use while taking into consideration how many of each piece were left. They are unreleased because I didn't think they were very effective.
Testing and Porting
Q: How did you test your AI?
DH: The old Apple II version of TOT could play AI against AI. But it was the same AI just taking turns so it wasn't used for evaluating AI algorithms. But it was sure fun to watch.
The new AI was just playtested by me, some beta testers, and my 90-year-old Japanese mother-in-law, whom I refer to as the Chief Beta Tester. She is really sharp, and many of the current UI mechanics came from her suggestions. But most importantly, the AI constantly improved due to her year's worth of play-testing. She'd easily beat the early versions of it, find bugs, and point out stupid or illogical moves that the AI would make in certain situations. (The easier AIs even now make some stupid moves.) I'd make improvements, give her the new versions, and she kept play-testing. She'd play for hours every day. After about a year of this we decided that the AI was pretty good, because neither my mother-in-law nor I could beat the stronger AIs very often. The strongest AI that shipped with the game was probably the fifth major iteration.
Q: How did you approach the porting from the older version to Corona and the iPhone/iPad versions?
DH: Getting the old Apple II version of TOT to the iPhone/iPad wasn't so much a port as it was a complete redo, because I don't have a digital version of the source code, just an old dot matrix printer printout which I skimmed to refresh my memory. I then thought about how I could add more strategic depth to the game. That's when I came up with the 1, 2, and 4 pieces that could be rotated. With that new concept in mind I considered porting TOT to the Nokia N-810 which I had at that time. But the dev tools that were available for that platform were not integrated and were overly complicated, and the iPhone was really starting to hit its full stride. So before I even bought an iPhone I read some iOS dev books and liked what I saw... the tools and development environment were tightly integrated.
I then bought an iPhone, signed up for Apple's iOS Developers program, and tried the standard Apple development approach using Objective-C and Cocoa Touch. Obj-C and OOP programming weren't hard but trying to learn the Cocoa Touch library was a bit much. There is just so much to it and it seemed like it'd take forever to learn it well enough to be able to accomplish anything substantial with it. That's when (I don't remember how) I came across Ansca Corona. And it blew my mind. A week after downloading the free version I had a TOT proof of concept running, something I couldn't achieve with Obj-C/Cocoa Touch even after six months of study. A good example of why developing in Corona was so much faster was that you could get an image on the screen with just one line of Corona code while doing the same in Obj-C/Cocoa Touch would take a whole page of code.
The current skin (images) of the iOS version of TOT, the Octagon Theory website, and the online, two-player Flash version of TOT were all done by my partner Ben Walker at http://ubacoda.com.
Anyway, the whole concept of TOT was in my mind and I pretty much remembered how I coded the old Apple II version so I just started coding away - with no serious design documents or specifications.
I completed TOT after a year and a half of twenty to thirty-hour weeks. About half of that time went into trying different graphical designs (We settled on the design done by my partner, Ben Walker), polishing the UI mechanics, and getting the AI right.
Q: Do you plan further improvements of the AI? What features would you like to add to the AI?
DH: The next AI improvement I want to add is depth search. Currently the AI just uses brute force and makes its decisions based on the current game state. It doesn't choose the best move based on looking a number of turns ahead at the repercussions caused by each possible move. I'd also like to try dynamic weighting of the board, where each position's weighting adjusts based on the current turn's board pattern; currently it is static. I also want to try adding an intentional blocking ability - currently it may seem that the AI is blocking your move, but that is entirely unintentional and is a side effect of some other calculation the AI has made. And of course I want to try a minimax- or negamax-based AI.
“I'd love to have AI vs AI competitions.”
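For readers unfamiliar with the negamax formulation mentioned here, a generic sketch follows. This is textbook negamax, not TOT code; `moves`, `apply_move`, and `evaluate` are hypothetical callbacks, and `evaluate` scores the state from the perspective of the player to move.

```python
def negamax(state, depth, moves, apply_move, evaluate):
    """Depth-limited negamax: recursively score each move by negating
    the opponent's best reply, the kind of lookahead the current
    single-ply TOT AI lacks. Returns (score, best_move)."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None
    best_score, best_move = float("-inf"), None
    for m in legal:
        score, _ = negamax(apply_move(state, m), depth - 1,
                           moves, apply_move, evaluate)
        score = -score  # the opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move
```

Because TOT is a two-player zero-sum game of perfect information, negamax is equivalent to minimax with less code, and alpha-beta pruning could later be bolted on to cut the branching the interview worries about.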
With the above in mind, I can provide anyone who is interested with a TOT AI Modders Kit, which includes a complete AI server and Lua source code for an example AI player. This can be run on a home computer (Lua and LuaSocket have to be installed) or even on an iPad or iPhone if iLuaBox is installed.
Also included is a TOT AI API document that explains everything, including how to design your own custom AI. The included Lua source code for the AI player can be played around with, customized, and modified, and then played against to see how your mods perform. Then when you become more advanced you can design your own completely new AI and plug it into the AI server API.
Although every version of TOT has been designed with the ability to access remote AI servers, this feature is disabled for now. But in mid-June I released "TOT AI", a special "AI Modders" version of TOT. This is different from all the other versions of TOT in that the feature that allows the entry of an AI Server IP address is enabled. I have intentionally priced this version very high ($10 USD in iTunes App Store) to keep the average person from buying it (but one guy already bought it, hmmm). TOT AI is available so that TOT AI Modders can easily get their hands on a client that allows testing of a custom AI running on a server. Anyone who is seriously interested in becoming a TOT AI Modder, and is thinking of buying TOT AI, should contact me BEFORE BUYING IT. firstname.lastname@example.org