AI and Non-Player Characters Workshop Report

Andrew Armstrong on November 23, 2008

This extensive report was written by Andrew Armstrong (blog), who attended a game AI workshop in the U.K. last week.

On Friday 14th November, Essex University hosted the AI and Non-Player Characters Workshop, sponsored by the EPSRC AI and Games Network. The day consisted of academics presenting their research in fifteen 20-minute talks. The slides should, at some point, be available on the network's site. I took notes in varying amounts of detail - the 20-minute time limit made it difficult to capture anything in great depth. I've linked to each author's webpage; if their slides are not up and my notes make something sound interesting, it's worth contacting them for more information.

The morning concentrated more on the psychology and character side of NPCs. I only made it to the event halfway through the second talk, so sadly I missed the host Simon Lucas' own talk on NPC AI Challenges for Machine Learning, and the Role of Competitions, and also missed most of Towards more expressive characters: applying cognitive appraisal by Ruth Aylett of Heriot-Watt University. The midpoint of the day moved over to RTS, then multi-agent and more competitive games.

Emotions and Synthetic Characters

Photo 1: Aladdin Ayesh explaining character emotions

Aladdin Ayesh of De Montfort University talked about Emotions and Synthetic Characters, comparing games to robotics - both need to do things in real time. For emotion modelling, he draws on psychology, following the behaviourist school rather than threshold models (which define X emotions, each with a threshold value). Translating this to a computational model is a major problem: once we have defined what the emotions are, they might overlap, but there isn't redundancy.

For expressing emotions, there are problems with subtlety and extremes - tears of joy, laughing while angry. There is also fidelity vs. perception, where it seems very high fidelity might not be good for games - the Wii offers a "rounded experience" rather than the high fidelity of the Xbox or PlayStation. Working with AIBO dogs, and testing people's reactions to being told the dog was controlled by a human rather than by AI, some felt cheated but some liked it anyway. Finally, he stated that emotion recognition shouldn't rely on facial recognition alone, although it has a good grounding in psychology.

Emotional Modelling for Synthetic Characters

Photo 2: Daniela Romano on Emotional Modelling.

Daniela Romano of the University of Sheffield talked about Emotional Modelling for Synthetic Characters, looking at her research into serious games. She stated that videogames have human-looking characters which lack realistic behavior - the Uncanny Valley, of course. Film, on the other hand, immerses the viewer by pre-rendering everything and having actors do the voices; real time is a lot more difficult. For serious games, the idea is to teach something under a specific emotion, so that when the emotion occurs again the lesson is brought back. On to intelligent characters, and work done with a test case of a monkey: a simulated character in a program (BASIC) takes events and an emotion as inputs, which are tested against a personality score + temperament score + social cognitive factors to produce an output action. To get this personality, a set of tests is used to derive a set of numeric emotional values, which still need work. In addition, specific modelling of events (such as "joyous" events) is tied to a certain personality, with 50 human testers providing the data. She was also asked some questions: first, what were videogames missing, to which she said cognitive learning. The counter-question was, since that is a lot to ask, what can be done in the meantime? Possibly scripted plans outside the game, such as steering behaviors, and making animation abstract from the AI, simplifying it.
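
To make the appraisal idea concrete, here is a minimal illustrative sketch (mine, not Romano's actual model - the names, weights and thresholds are all invented) of how an event and the current emotion might be scored against personality, temperament and social-cognitive factors to pick an action:

```python
# Minimal sketch of an appraisal-style emotion model: an incoming event plus
# the character's current emotion is scored against personality, temperament
# and social-cognitive factors to pick an action. All names and weightings
# here are illustrative, not the speaker's actual system.

from dataclasses import dataclass

@dataclass
class Character:
    personality: float       # e.g. derived from a standardised personality test
    temperament: float       # baseline reactivity
    social_cognition: float  # learned social factors

def appraise(character: Character, event_valence: float, emotion: float) -> str:
    """Combine the inputs into a single appraisal score and map it to an action."""
    score = (character.personality
             + character.temperament
             + character.social_cognition) * event_valence + emotion
    if score > 1.0:
        return "approach"    # e.g. the monkey moves towards the stimulus
    elif score < -1.0:
        return "flee"
    return "observe"

monkey = Character(personality=0.4, temperament=0.7, social_cognition=0.2)
print(appraise(monkey, event_valence=1.0, emotion=0.3))  # -> "approach"
```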

This time with feeling… Approaches to more convincing simulated emotional performance in virtual characters

Photo 3: Jon Weinbren discussing how to allow convincing NPC acting.

Jon Weinbren, from the University of the Creative Arts and Imaginary, discussed This time with feeling… Approaches to more convincing simulated emotional performance in virtual characters. Having written for the industry, he immediately stated that computer games are the McDonald's of the cultural landscape - satisfying and fun, but with no nutritional value whatsoever! (He apologised for this after the laughter.) From a writing perspective the medium is genre-deficient, over-reliant on combat or competitive sports, with poorly written stories, clichéd characters and bad acting - not just the voice acting but the non-player characters as a whole. Believable characters are a core ingredient of drama, and emotion is key to achieving that in a character - you should be able to easily read its emotions.

In live theatre and film, actors can instantly work with emotions, and directors and writers can alter a scene to help. Animators can work on gestures and movement, but not in real time - so it looks good, but doesn't convince. If you take the cognitive science approach (of the previous speakers) and map rules onto facial expressions and gestures, there is little consensus on the definition and functionality of the emotions, since they are such a complex combination of all those parts. Finally, a major problem is that portraying drama (as in games) is not life simulation; it's a medium to convey human experience. Hence an alternative starting point: using dramaturgy and acting as a reference rather than "real" human behavior. Thanks to the falling costs of motion capture and video equipment, content analysis can build a practical framework rather than a theoretical one. You can work with an actor's performance - how do they train to portray certain emotional experiences? Eventually this will yield a working set of narrative actions from actors, which can be applied to behaviors in games. A question at the end suggested Jon was contradicting himself in saying don't use cognitive science - so how can you evoke the behavior without modelling it? He said to use a script writer, scripting the necessary states.

Planning and Quest Generation

Photo 4: Richard Bartle on planners and quest generation.

Richard Bartle from the University of Essex gave a talk on Planning and Quest Generation, taking on the problem of MMO quest generation. Currently, MMO quests are either procedurally made "Chinese menu" style (select one from each column: person to give the quest, item to get for them, number of items to get) or - more usually, since Chinese-menu quests don't work well - handmade and tailored. An alternative, however, is to give quest givers goals, ambitions and actions to perform, which cause quests to be created. The example given was the ambition "gain power". There would be various means to satisfy this, one being to become the head of some organisation - in this case a religious organisation and, to be more specific, a Druidic one.

When planning this, it lets you answer the "why" (I want to be the leader to gain power), the "how" (be the best candidate, or make a vacancy), and the "which/what/who" - as variable bindings - which religion to lead, what position the plan has reached, but with no set end date. This can lead to quests as different as giving out leaflets, assassinating the current leader, or bribing people to achieve the final goal. It can also replan if some major event occurs (in this example, say the king appoints a new Druid leader instead of this NPC. New goal: get rid of the king?)

Photo 5: The hierarchical system employed to have the NPC plan and then give out quests.
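
As a rough illustration of the idea (this is my own sketch, not Bartle's system - the goal hierarchy and method names are invented), an NPC's ambition can be decomposed down to leaf steps that become player-facing quests:

```python
# Hypothetical sketch of goal-driven quest generation: an NPC holds an
# ambition, a planner decomposes it into sub-goals, and leaf steps are
# handed out to players as quests. The hierarchy below is illustrative.

GOALS = {
    "gain power":        [["lead organisation"]],
    "lead organisation": [["be best candidate"], ["make vacancy"]],
    "be best candidate": [["quest: give out leaflets"], ["quest: bribe elders"]],
    "make vacancy":      [["quest: assassinate current leader"]],
}

def plan(goal, steps=None):
    """Depth-first decomposition; the first applicable method is chosen."""
    if steps is None:
        steps = []
    if goal.startswith("quest:"):
        steps.append(goal)       # a leaf step becomes a quest for players
        return steps
    method = GOALS[goal][0]      # a real planner would choose and replan here
    for subgoal in method:
        plan(subgoal, steps)
    return steps

print(plan("gain power"))  # -> ['quest: give out leaflets']
```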

The problem is that it takes a lot of planning operators - so it's CPU-expensive, and too much of it could slow down an MMO server considerably. So, use the past - it contains untapped seams of AI gold (from the days of MUDs and after): anything that ran on an old system can run as a microprocess today. It's hard to get developers to do this, because they like to reinvent the wheel. He pointed towards http://www.skotos.net/articles/DAWNOF.shtml for more info.

He was also asked how you stop plans from being solved entirely: by monitoring them, or by always having conflict - and you would use other systems too, not just this one. He finally mentioned that the moral problems are an interesting side issue as well.

I Have a Cunning Plan: Putting AI Planners into Game Characters

Photo 6: John Levine on AI planners playing games.

John Levine from the University of Strathclyde talked about planners in I Have a Cunning Plan: Putting AI Planners into Game Characters. First off, he explained there are significant problems getting planners to solve games. His research is on getting the AI to take the seat of the player, treating games as challenge problems: while his main research is on real-world problems, games are great sandboxes. His short description of planning: the AI has some goals and actions at its disposal, and plans a chain of future actions to reach a goal. Actions have preconditions and effects, which should reflect how the world works (and can include duration of events, time, etc.). Planners then search the state space for a plan, using an abstract model to test it. The problem is that the abstract model isn't exactly like the full game - this requires robust execution of plans in a dynamic and uncertain world. One of the assumptions is that things happen predictably - so if you also add opponents into the game, you need to model some of that.
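
For readers unfamiliar with this style of planning, here is a toy state-space planner in that spirit - actions with preconditions and effects, searched breadth-first over an abstract world model. The two-room gem domain is invented for illustration:

```python
# A toy state-space planner: actions with preconditions and effects,
# breadth-first search over an abstract world model. The domain (a
# two-room gem puzzle) is made up purely for illustration.

from collections import deque

# A state is a frozenset of facts that are currently true.
ACTIONS = {
    # name: (preconditions, facts added, facts deleted)
    "move A->B": (frozenset({"at A"}), frozenset({"at B"}), frozenset({"at A"})),
    "move B->A": (frozenset({"at B"}), frozenset({"at A"}), frozenset({"at B"})),
    "grab gem":  (frozenset({"at B", "gem at B"}), frozenset({"have gem"}),
                  frozenset({"gem at B"})),
}

def plan(start: frozenset, goal: frozenset):
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:            # all goal facts hold
            return steps
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:         # action is applicable
                nxt = (state - delete) | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan(frozenset({"at A", "gem at B"}), frozenset({"have gem"})))
# -> ['move A->B', 'grab gem']
```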

He then went on to his research. They have various levels of planners:

  • Level 1 currently solves some games, such as Sokoban (pushing gems to goals) and Rush Hour (moving cars out of a grid). These work well - there is complete information and no opponents to model.

  • Level 2 copes with more abstraction of the world - the example game is Pingus (a Lemmings clone), where planning can solve the first five or so levels using a topological model with manually placed node points; automating the placement of node points is next.

  • Level 3 adds unknowns, using Bruceworld (Die Hard: terrorists holding hostages), where reactive rules are needed for unknowns such as the terrorists in the room with the hostages. However, opponents are assumed not to change any plan, and actions always succeed.

  • Finally, level 4 adds opponents which have to be modelled in the plan, with Pac-Man and Ms. Pac-Man being good examples (Ms. Pac-Man is non-deterministic); they will be working on this soon.

Photo 7: Sokoban - solved by level 1 planners.

He was asked whether his work did continual planning - not as such; there are horizons where a plan ends and is then replanned, but the agent behaviors should be robust enough to cope with any eventualities that disrupt the plan. He was also asked how long it takes to reason out a plan - it depends on the situation; the elements are kept simple, so the planning is fast and dumb, and the fuzzy behaviors are trained offline. To do this, you need to make the planning world as abstract as possible.

Natural Language Engineering: Where are we?

Udo Kruschwitz from the University of Essex discussed Natural Language Engineering: Where are we? - although he said he wasn't applying this to videogames at all, so it wasn't a game-related talk, just a general overview of the field as it stands. Starting from the early NLE systems - ELIZA, which mainly uses pattern matching; PARRY, which simulates a paranoid patient and so includes some planning; and SHRDLU, which adds some grammar - comes the question: why is it difficult?

Answer: ambiguity! Bank, book, ruler - lexical ambiguity. Multiple interpretations of the same sentence: "I saw the man with the telescope on the hill", "Five monkeys ate three bananas", "Time flies like an arrow". It is not just English, either: "Lassen Sie uns noch einen Termin ausmachen" might be "I want to book an appointment" or "change a future appointment". Language is changing too - "I want to buy a mobile". There is also ill-formed input into NLEs: "Accomodation office", missing an M; and multilinguality - "Hans Hjelm" is a name, but also means "his helmet". And there are always exceptions to the rules.

This all applies to the speech and natural language communities. The two are largely separate research areas - although they do talk, people usually work in one or the other. The two main paradigms are deep processing (deep, hand-coded rules, knowledge-rich) and statistical processing (shallow, building up a statistical model of language, data-rich).
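
As a toy illustration of the statistical paradigm (my example, not from the talk): no hand-coded grammar, just counts over data. A bigram model picks the more likely continuation of a phrase:

```python
# Tiny sketch of "statistical processing": a bigram model built purely from
# corpus counts, with no hand-coded rules. The corpus is invented.

from collections import Counter

corpus = ("i went to the bank . the river bank was muddy . "
          "the bank raised interest rates .").split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def p_next(word, nxt):
    """P(nxt | word), estimated from corpus counts."""
    return bigrams[(word, nxt)] / unigrams[word] if unigrams[word] else 0.0

# Which word is more likely to follow "the" in this data?
print(p_next("the", "bank"), p_next("the", "river"))  # 0.667 vs 0.333
```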

Moving on to where the work is now, various "finished" things include processing typed input (spelling corrections, web searches), searching for words (regular-expression search), text classification (spam detection…), POS tagging and spoken dialogue systems (phone systems).

More advanced dialogue (conversing with a machine) isn't there yet, however. Finally, with an example played, he said speech synthesis has become much better - saying a sentence naturally rather than blandly - along with practical speech recognition.

Using genetically optimised AI to improve game-playing fun for strategy games

Photo 8: Christoph explaining the different exploits, bugs and unfun elements found by genetically optimised AIs.

Christoph Salge from the University of Hertfordshire detailed his work on Using genetically optimised AI to improve game-playing fun for strategy games. He helped create the game "Rise of Atlantis" to research different scientific areas - perception and 3D space, agile development methods and, most relevantly here, genetically evolving AI - all to maximise fun for the players.

The problem is that having ideas is easy; implementing them is hard - so evaluate them early, fast and cheap (the agile method). The idea was to use an evolving AI to simulate a human player, exploits and all. The evolving AIs are laughably bad and thus a bit useless to play against, but the final AIs are playable. Because it is hard to measure "fun" and which parameters of a game are fun, it is easier to measure what settings prevent fun. These include dominant strategies ("do this and you always win"), inferior choices (a unit you'd never build, or a choice you'd never make, which just makes the game more complicated), repetitive strategies (non-reactive to the environment - do the same things to win regardless of the map) and extremely easy or hard games (difficulty).

The AI in theory evolves to adapt to the game rules. Rise of Atlantis is a turn-based strategy game, with several units to control around the world and several actions those units can perform while consuming food and gold - similar to Civilization 4. It is based on XML, and AI can automate it. They researched four possibilities. Firstly, planning, which was really, really hard and so was never finished. The second was AI Swarm, which was the most successful (because of the similarities with Civ4): it has internal motivators (harvest food if hungry, trade goods with members of the swarm, etc.) and no central communication (though some shared data, like threat maps), yet the individuals work together. It adapted easily to changes - not much high-level strategy, but it creates the most dominant strategies.

The third was AI Councillor - a president handing out resources to different councillors (e.g. war, unit, expansion) which react accordingly. Lastly, reactive AI, which was mainly hard-coded - threat patterns, resource-extraction patterns and so on: a lot of coding effort and a lot of work identifying every situation. It weighted the options available and chose one. To test these, evolution was used: behaviors were weighted (some favouring war, some favouring being civil), there were fitness functions, and there was offspring - failed AIs didn't get offspring.
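
The evolutionary test loop might look something like this minimal sketch (illustrative only - the genes, mutation scheme and fitness function are invented stand-ins; in the real work, fitness came from playing matches):

```python
# Rough sketch of the evolutionary test loop described: AI players carry
# behaviour weights (war vs. civil etc.), fitness comes from playing matches,
# and the losers get no offspring. Everything here is illustrative.

import random

def random_genome():
    return {"war": random.random(), "civil": random.random(),
            "expand": random.random()}

def mutate(genome, rate=0.1):
    return {k: min(1.0, max(0.0, v + random.uniform(-rate, rate)))
            for k, v in genome.items()}

def fitness(genome):
    """Stand-in for playing a full match of the game; higher is better."""
    return genome["war"] * 0.5 + genome["expand"] * 0.5  # dummy evaluation

population = [random_genome() for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                 # losers get no offspring
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]

print(max(population, key=fitness))
```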

It worked fairly well, identifying several hard-to-find exploits due to bugs in the code. For example, selling goods to one destination should lower their price to a small percentage of production cost, but the minimum was 1 gold; corn cost 2 gold, so a massive corn industry made a hugely dominant strategy. Iron weapons were also very expensive versus their benefits, so the AI never used them until more bonuses were added. It was also noted, humorously, that since crashing the game server meant a rematch (in the evolution battles), some AIs deliberately crashed it when losing so they could get through the round and win ;) (the game being unstable was a major problem, hence the rule to restart on a crash). These changes made the game much more fun to play.

Finally, they are now working on adaptive AI learning for Civilization 4. He was asked why the planner didn't work - he had told the undergraduates who wanted to do it that it was massively complex and hard, and it fell down on getting the data into an easy-to-use form for the planner. He was also asked about getting the game source to try planning (the person asking was part of an earlier team who did like planning ;) ), to which he responded that it's unstable and unreleased, but contact him and he might be able to sort something out.

Group Movement and Unit Selection in RTS Games

Photo 9: Mike Preuss on the solution to group movement - flocking and influence maps.

Mike Preuss from the University of Dortmund detailed the problems of, and possible solutions to, Group Movement and Unit Selection in RTS Games. The main problems for group troop movement were bottlenecks where groups get stuck, choosing the right path to attack an enemy (and reacting to things that disrupt that attack), and, once you reach that enemy, having the right units attack the right units. The game used was a rock-paper-scissors, Warcraft 3-like game.

To cover the movement issue, influence maps are used with flocking to map control of the land and experiment with better behaviors. From this, two strategies emerged for a common problem - defensive towers in the middle of a path: one was to move slowly around the towers' range (letting the entire group survive), the other to group together when attacking the towers, or to move around them. Finally, for unit-versus-unit effectiveness, learning Self-Organizing Maps (SOMs) are used to find the most effective combinations. A simulation simply iterates the unit types against each other, nailing down the definite RPS-based counter-groupings. All of this training can be done offline with fast simulation of the units if you know their statistics (taking only basic values, not advanced variables like random damage). The SOM-selected groups were far more effective than randomly spawned groups, wiping out the random group in almost all cases. It should also be reasonably easy to implement in a game due to its low complexity.
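
For anyone who hasn't met influence maps, here is a minimal sketch of the general technique (my illustration, not Preuss's code): each enemy source projects influence onto a grid with distance falloff, and group movement prefers low-danger cells, e.g. skirting a tower's range:

```python
# Minimal influence-map sketch: each enemy source projects influence onto a
# grid with distance falloff; routing then prefers low enemy-influence cells.
# Grid size, sources and falloff are all illustrative.

def influence_map(width, height, sources):
    """sources: list of (x, y, strength); influence decays with distance."""
    grid = [[0.0] * width for _ in range(height)]
    for sx, sy, strength in sources:
        for y in range(height):
            for x in range(width):
                dist = abs(x - sx) + abs(y - sy)      # Manhattan distance
                grid[y][x] += strength / (1 + dist)
    return grid

enemy_towers = [(5, 5, 10.0), (6, 5, 10.0)]
danger = influence_map(10, 10, enemy_towers)

# Pick the safer of two candidate waypoints when routing the group:
a, b = (5, 4), (0, 4)
safer = min([a, b], key=lambda p: danger[p[1]][p[0]])
print(safer)  # -> (0, 4): the route that skirts the towers
```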

Opponent modelling for poker and other games

Photo 10: Peter Cowling on doing opponent modelling in One Card Poker.

Peter Cowling from the University of Bradford explained how his work has started on Opponent modelling for poker and other games. Starting with the statement "videogames are interactive!", he detailed that there is a lot of work still to go - Facade is a good example of progress, but there's still much to do - and that determining whether something is intelligent generally means interacting with it (the Turing definition). For his research, One-Card Poker was used: a 10-card deck, 2 or 4 players, each gets one card, and you gamble money (most of the good bits of poker). Not quite a full videogame, but he had previously started with a hard problem that was very difficult to get learning iterations for - by doing it for a simple problem, if it works, you can then move on to much harder problems.

A simple AI player will bet if it thinks it will win, and fold if it thinks it won't. The types of simple player: loose aggressive (bets high on many hands), loose passive (bets less on any hand), tight aggressive (plays the odds, raising on a good hand), tight passive (rarely raises the pot even with a good hand). After asking the audience which we thought won the most, the answer was obvious - the tight passive player wins a lot of the time: it pays to play dull.

NeuroEvolutionary opponent modelling is used - a NEAT box with inputs of the opponent's last actions, the amount of money they have, and the card the player holds; it outputs a desired action. After a few hundred games, it turns out to be very decisive (not 0.33/0.33/0.33, but one very deliberate choice). Playing against each fixed AI player, tight passive was the only one the tactic didn't beat 100% of the time on average; against the others there were enough iterations to reach a good win percentage. In fact, a single neural network can beat all of these players taken in turn. Using Bayes' theorem, considering a strategy A with action B, the probability that an agent is following a given strategy is predicted from its actions. Data from 100,000 rounds of games gives the number of times players fold, check/call or bet/raise; this information is added to the NEAT inputs to predict whether a player is in a certain category. Is it a useful addition to the model, or is it already captured in the network as it stands? It does add significant value. For multiple players, NEAT is run once per player and then outputs an action; Bayes helps a lot here too, getting a much better result. They have now run checks on playing against different opponent personalities, the amount of bluffing done, and players who change their tactics partway through the game.
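
The Bayes step works roughly like this sketch (the per-strategy action frequencies below are invented for illustration, not the actual data from the 100,000 rounds):

```python
# Sketch of the Bayes step: from observed fold/call/raise actions, estimate
# which archetype an opponent is playing. P(strategy | actions) is computed
# from P(action | strategy) and a uniform prior; the frequencies are invented.

LIKELIHOOD = {
    "tight passive":    {"fold": 0.6, "call": 0.3, "raise": 0.1},
    "tight aggressive": {"fold": 0.5, "call": 0.1, "raise": 0.4},
    "loose passive":    {"fold": 0.1, "call": 0.7, "raise": 0.2},
    "loose aggressive": {"fold": 0.1, "call": 0.2, "raise": 0.7},
}

def posterior(observed_actions):
    """P(strategy | actions) via Bayes' theorem, assuming a uniform prior."""
    scores = {}
    for strategy, probs in LIKELIHOOD.items():
        p = 1.0
        for action in observed_actions:
            p *= probs[action]
        scores[strategy] = p
    total = sum(scores.values())
    return {s: p / total for s, p in scores.items()}

print(posterior(["fold", "fold", "call", "fold"]))
# "tight passive" comes out clearly ahead for this action history
```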

Photo 11: Peter Cowling’s conclusions on One-card poker

He was asked about testing against other learning methods - it worked reasonably well with no prior data, and exceedingly well once it had data on what actions the opponent would take. He was also asked about testing against human opponents - there is no learning there yet, but it worked reasonably well. Finally, he was asked if this would make a good testbed for an AI challenge, to which he responded he'd look into it. He also mentioned the next AI and Games Network meeting, in Bradford on the 12th of January.

Multi-agent System Games in GOLEM

Photo 12: Kostas Stathis on GOLEM.

Kostas Stathis from Royal Holloway College detailed Multi-agent System Games in GOLEM. GOLEM is agent technology, standing for General Onto-Logical Environment for Multi-agent systems: building a multi-agent system based on the environment, with a large number of agents, goals, etc. GOLEM itself is an architecture - it interacts with the environment through events, uses agent templates, and can scale up to many agents. It uses knowledge, goals and plans to model the mind of an agent, and it allows the definition of events with properties, coding exactly what happens in boolean logic. Peer-to-peer work was done for load distribution - the area is split up, although this means an agent's view (due to a fog-of-war-like viewspace) is hard to lay over multiple container views. This was shown with a 2D tile-based game example, with fog of war and the area split up - when an agent's view overlaps a boundary, both containers need to work it out.

Ability versus Enjoyability in DEFCON AI-bots

Photo 13: Robin Baumgarten on the DEFCON AI-bot API.

Another AIGameDev contributor, Robin Baumgarten, gave a talk on Ability versus Enjoyability in DEFCON AI-bots. First, DEFCON itself: it's real-time strategy with no resource management - units are pre-placed. Writing a bot directly in the source code is unworkable, since you couldn't distribute your bot (that would mean releasing the source code of a game that's still being sold), so an API was created. The existing AI is a linear finite state machine: it places units, scouts, assaults what it finds, strikes, and makes its final moves. However, it cannot learn, and is predictable and inflexible. The new bot design has two layers - basic actions (movement, fleet control, synchronising attacks) and planning (high-level actions). Attack synchronisation is important, to overwhelm defences all at once.
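
A linear FSM of that sort is about as simple as game AI gets, which is why it is so predictable - something like this sketch (the state names follow the talk; the logic is my illustration):

```python
# Sketch of a linear finite state machine like the stock DEFCON AI described:
# a fixed sequence of phases with simple exit conditions, which is exactly
# why it is predictable. Illustrative only.

from enum import Enum, auto

class State(Enum):
    PLACE_UNITS = auto()
    SCOUT = auto()
    ASSAULT = auto()
    STRIKE = auto()
    FINAL = auto()

ORDER = list(State)  # members in definition order

class LinearBot:
    def __init__(self):
        self.state = State.PLACE_UNITS

    def update(self, state_finished: bool):
        """Advance strictly down the list once the current phase completes."""
        if state_finished and self.state is not State.FINAL:
            self.state = ORDER[ORDER.index(self.state) + 1]
        return self.state

bot = LinearBot()
for done in [True, True, True]:
    print(bot.update(done))
# -> State.SCOUT, State.ASSAULT, State.STRIKE
```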

Photo 14: Robin explains that, testing against the default AI, a maximum of 75% wins is obtained through training iterations, due to strategic vs. tactical decisions.

For planning, the bot has a case base of past data and a decision tree of plans (with outcomes "did we win, did we lose"); it executes the plan in-game to get new case data out of it. The final result against the default bot is a maximum of around 75% wins - these are only strategic actions; tactical actions are required to raise it any further. His hypothesis of fun is ability vs. enjoyability - a tough, but not too tough, bot. In a small sample of novice humans, losing is not fun, so they enjoyed the original Introversion bot, which is easier to beat. A possible suggestion is dynamic difficulty, since frustration rose and enjoyability fell against the new bot for novices. The final API includes console debugging and an in-game timeline to see actions, and has a C++ DLL interface plus Java and Lua interfaces. There may be a competition in 2009. As of writing, the API is also out.
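
The case-based loop could be sketched like this (entirely illustrative - the situation features, similarity metric and plan names are invented): retrieve the most similar winning case, execute its plan, then store the outcome as a new case:

```python
# Rough sketch of a case-based planning loop: retrieve the most similar past
# situation that won, run its plan, then store the outcome as a new case.
# All features, plans and the similarity metric below are invented.

CASE_BASE = [
    # (situation features, plan, did_we_win)
    ({"our silos": 6, "enemy fleets": 2}, "rush strike", True),
    ({"our silos": 3, "enemy fleets": 5}, "turtle and defend", True),
    ({"our silos": 6, "enemy fleets": 5}, "rush strike", False),
]

def similarity(a, b):
    return -sum(abs(a[k] - b[k]) for k in a)   # negative distance: higher = closer

def choose_plan(situation):
    wins = [c for c in CASE_BASE if c[2]]      # prefer cases that won
    best = max(wins, key=lambda c: similarity(situation, c[0]))
    return best[1]

situation = {"our silos": 4, "enemy fleets": 5}
plan = choose_plan(situation)
print(plan)                                    # -> "turtle and defend"
# ...after the match, append (situation, plan, won?) back into CASE_BASE
```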

Conflict mining and plan adaptation in games

Photo 15: Yiannis Demiris on the relationship of robotic learning and planning in relationship to videogames.

Yiannis Demiris of Imperial College detailed work in robotics relating to games in Conflict mining and plan adaptation in games. Some of the robotics work done by his team is applicable to games: the need to model humans, the need to learn from data, and the need to adapt plans to new situations.

For modelling humans - for a robotic wheelchair, say - the system needs to model what the human wants to do. It uses a zone of capability (what they can do now) and a zone of proximal development (what they can do with assistance from an expert). It uses forward models (input: current state and action; output: next state) and inverse models (input: current state and target; output: the action that needs to be performed). Neural networks and Bayesian networks have been used to model these. The system works by having the inverse model feed the required action into the forward model to get feedback. To learn new data, the existing set of actions is exercised in a situation not yet learnt; once enough is learnt, a new plan is matched up and put to use.
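
The forward/inverse pairing is easy to see in a toy example (hand-coded here for a one-dimensional position; in the real work both models are learned with neural or Bayesian networks):

```python
# Toy sketch of paired forward/inverse models. Hand-coded for a 1-D position
# so the feedback loop is visible; in the real work these are learned models.

def forward_model(state: float, action: float) -> float:
    """Predicts the next state from the current state and an action."""
    return state + action

def inverse_model(state: float, target: float) -> float:
    """Proposes the action needed to move from state towards target."""
    return max(-1.0, min(1.0, target - state))   # capped step size

state, target = 0.0, 3.5
while abs(target - state) > 1e-6:
    action = inverse_model(state, target)        # what should I do?
    predicted = forward_model(state, action)     # what will happen if I do it?
    state = predicted                            # act (prediction == outcome here)
    print(f"action={action:+.2f} -> state={state:.2f}")
```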

NPC Opinion Modelling in Fable

Photo 16: Adam Russell describing the way NPC AI opinions were constructed for Fable

Finally, the last talk of the day had Adam Russell from the University of Derby talking about NPC Opinion Modelling in Fable, detailing his work at Lionhead on Fable. The philosophy of AI in Fable: certain types of games are about roleplaying - adopting a role. How do you give the experience of being a rock god, a race car driver or a hero? Is there something more than just labelling the player as a certain persona? You need the parts around the player too!

Simulated societies can make the player feel lonely - the player wanders around inside and is ignored by it. The society should really provide a mirror rather than a picture, so the player can reflect on themselves in it. For Fable, the society is about you, the player - it is there for the player, which generalises to a lot of games. The primary axis of identity is being good or evil, and going from farmboy to hero. To help the player there are some secondary parts - character customisation, for instance - but primarily it tracks statistics of the player to affect the world. Should the entire society hold a single opinion? Or perhaps a cacophony of separate, unshared opinions based on personal history?

In fact a hybrid was used - some shared state ("How evil am I?"), while individuals can have separate reactions based on their history. The axes are good/evil and high/low renown. What kind of reactions do you want for certain combinations? Movie-like reactions - running away from the evil man entering town, for instance. It isn't really based on any model. However, you don't want characters switching instantly once the inputs cross a threshold, so this is varied using roulette-wheel selection of animations and dialogue - Fable 2 improved the distributions even further.
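
Roulette-wheel selection is a standard trick; a minimal sketch (the reactions and weights are my invention, not Fable's data) looks like this:

```python
# Sketch of roulette-wheel selection: instead of snapping to one reaction
# when morality crosses a threshold, reactions are drawn from a weighted
# distribution. The reactions and weights below are illustrative.

import random

def roulette_select(weighted_options):
    """Pick an option with probability proportional to its weight."""
    total = sum(w for _, w in weighted_options)
    pick = random.uniform(0, total)
    running = 0.0
    for option, weight in weighted_options:
        running += weight
        if pick <= running:
            return option
    return weighted_options[-1][0]   # guard against float rounding

# Reactions to a fairly evil, highly renowned hero entering the village:
reactions = [("scream and flee", 0.5), ("cower", 0.3),
             ("nervous greeting", 0.15), ("ignore", 0.05)]
print(roulette_select(reactions))
```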

Photo 17: The Fable morality vs. renown response table (with additional operators affecting opinions in the bottom left).

To give feedback to the player, specific things happened when the player performed certain actions, so he could see whether people liked or disliked them. There was a lot of variation in reactions based on fame and the action - lots of voice lines, for instance. Getting married added opinions for the wife, and so on. Sometimes it is all about the player experience - the alternative perspective that intelligence isn't always what we're after in game AI.

He was then asked what went wrong with Fable, and what he'd do better. He said that since they mapped out the multidimensional space and filled it with equal amounts of content, and the player's movement through that space was small, a lot of dialogue never got played; concentrating on particular areas to get less repetition would be a better way to do it. He was asked about using text-to-speech to remove the limit on the number of dialogue lines - he said at the time it was impossible (Xbox hardware), though looking at it now might be very different. He was also asked why the visual actions sometimes don't coordinate with the voices - it was because reactions were pervasive states rather than one-liners, so characters didn't just deliver a line and go back to what they were doing. Animation systems have since improved to remedy this.

Conclusion

So, there we go - most of the day's sessions, hopefully as unabridged as I can make them, sticking to their content, since my own thoughts would be pretty inadequate on most of the topics. The network should have slides up at some point too, which is a big bonus considering most of the speakers were pressed for time. I'll hopefully be at the next one in Bradford - check it out if you can; it should be worth going to.

Discussion (3 Comments)

zoombapup on November 23rd, 2008

It annoys me I missed this one. A few of those sessions would have been right up my alley. Having said that, Essex is miles away and I'm horribly busy, so maybe it's a good thing. Bradford is a lot closer to home :)

kierand on November 24th, 2008

I made it to this one. It was an interesting day, but it would have been a lot better if there was more of an industry presence. I think there were 4 of us, to about 25 academics. Still, I think the network has the potential to bring academic innovation into the industry, which can only be a good thing.

merf on November 24th, 2008

I also made it along to this. There were some interesting topics covered, but all the sessions were extremely short, meaning that nothing was covered in much depth. In addition, many of the sessions spent a good deal of time covering background material that I would hope most industry professionals and academics would be familiar with. It was also interesting to note the difference in presentation style between the industry bods and the academics. The industry chaps favoured a more free-wheeling and pragmatic approach, whilst some of the sessions from the academics relied on a lot of dry statistical/theoretical material. Perhaps it's just a personal preference, but when I'm getting a lot of information presented back to back, I prefer to be given the high-level, less formal overview and then delve into the details in my own time. Tell me something worked and I'm happy to not have it rigorously proven to me there and then :) It is always going to be tough to strike a happy medium here where both parties are getting something valuable from each other, but it's good to see some first steps being made.
