Thursday, 10 July 2008

Fun and games with designing AI

We have got Magefire to a very basic playable state and have been able to test it. There were bugs in the engine, as you would expect this early on, but by our second playtest nothing was a showstopper. In fact a surprising amount of it worked properly, and overall the engine is behaving itself.

We were testing the line of sight code, initiative-based unit ordering, the combat system, map shrouding, ranged attacks, summoning of the basic units (various types of Imps) by the wizards, movement including flight, and a points system to show who was winning. We also tested the "replay mode", which has been built too and replays anything your units could see that occurred when it wasn't your turn. Most of this did exactly what we wanted it to, which got me thinking about what the next step should be. And, as the set of possible actions at this point is very small (things such as move around, summon, attack, etc.), I figured this was a good time to start on the single-player AI (and by extension, the multiplayer independent creature AI).

Whenever you begin developing something from scratch - in any programming language or environment - you are faced with choices. How should the system be designed? Are you really building a system? Object-oriented programming teaches us to think abstractly, at a high level, so we don't get bogged down in implementation details when considering an overall design. A very useful skill. So I knew that, generally, the AI has to make decisions based on its goals. Thinking like that freed me up from thinking about FSMs, behaviour trees and so on. Instead I was thinking about what needed to be achieved. I recalled a New Scientist article I had read recently where the author suggested that what our brains do is take a range of options, pick one, and then feed the result back in by adjusting the weights according to whether it was successful. Now I know that this is really just a neural network rephrased, but putting it in its simplest terms like that got me thinking. To do that and support reinforcement learning, you don't actually need a structure as complicated as a neural net. You could have a rules-based AI system.
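To make that idea concrete, here is a tiny sketch of the feedback loop as I understood it from the article: pick among weighted options, then nudge the chosen option's weight up or down depending on how it turned out. This is just an illustration, not Magefire code, and all the names (the options, pick_option, reinforce) are invented for the example.

    import random

    # Hypothetical sketch of "pick one option, then reinforce its weight".
    weights = {"advance": 1.0, "hold": 1.0, "retreat": 1.0}

    def pick_option(weights):
        # Weighted random choice among the options.
        total = sum(weights.values())
        roll = random.uniform(0, total)
        for option, weight in weights.items():
            roll -= weight
            if roll <= 0:
                return option
        return option

    def reinforce(weights, option, succeeded, rate=0.1):
        # Strengthen the option if it worked out, weaken it if it didn't,
        # keeping the weight above a small floor so it can recover later.
        weights[option] = max(0.01, weights[option] + (rate if succeeded else -rate))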

The basic idea is this. Everything stems outwards from the actions that an individual unit can do. Actions include such things as walking around or attacking an enemy. The default weight on these would be zero. However, each action has a number of requirements that must be met, or the action is not considered by the AI system at all. A requirement could be that the unit has a ranged attack before the ranged attack action can be considered. Provided the action passes these checks, the AI then runs through a list of modifiers that affect how much priority the action has. A modifier could be how likely the ranged attack is to hurt the enemy, and how badly. So you have a whole bunch of these actions and a simple stack of candidates that gets sorted by priority, which determines what the unit should do (see the sketch below).
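As a rough sketch of what I mean (not the actual Magefire code; the class, the unit attributes such as has_ranged_attack and expected_ranged_damage, and the example action are all made up for illustration):

    class Action:
        def __init__(self, name, requirements, modifiers):
            self.name = name
            self.requirements = requirements  # list of unit -> bool checks
            self.modifiers = modifiers        # list of unit -> number adjustments

        def priority(self, unit):
            # Every action starts from a default weight of zero; the
            # modifiers then raise or lower its priority.
            score = 0.0
            for modifier in self.modifiers:
                score += modifier(unit)
            return score

    def choose_action(unit, actions):
        # Throw out any action whose requirements are not all met.
        candidates = [a for a in actions if all(req(unit) for req in a.requirements)]
        # Sort the survivors by priority and do whatever comes out on top.
        candidates.sort(key=lambda a: a.priority(unit), reverse=True)
        return candidates[0] if candidates else None

    # Example: a ranged attack is only considered if the unit actually has a
    # ranged attack and can see an enemy; its priority scales with the
    # expected damage.
    ranged_attack = Action(
        "ranged attack",
        requirements=[lambda u: u.has_ranged_attack and u.visible_enemies],
        modifiers=[lambda u: u.expected_ranged_damage()],
    )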

Now this won't cover planning or goal-oriented behaviours, but it should be fine for individual, independent units that only need to worry about their own safety. So this is the approach I'm going with, and it's mostly done now. Hopefully in the next week we'll be able to stick independent creatures on the map and have their behaviour emerge, for example running away when injured without that circumstance being specifically programmed for: instead, a safety influence map gets selected as a movement target, with a priority that rises as the creature gets more injured. In the future, the weightings on the modifiers could in turn be adjusted by reinforcement learning.
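To illustrate the run-away example using the Action class from the earlier sketch, a "move to safety" action could carry a modifier that reads the safety influence map and scales with how hurt the creature is. Again, this is only a sketch under assumptions: the safety_map lookup, hit_points, move_range and the 10.0 tuning weight are all hypothetical names and values, not Magefire internals.

    # Hypothetical modifier: the more hurt the creature is, the higher the
    # priority of moving towards the safest reachable tile on the influence map.
    def safety_modifier(unit):
        injury = 1.0 - (unit.hit_points / unit.max_hit_points)  # 0 = healthy, 1 = near death
        safest_tile_value = max(unit.safety_map.values_in_range(unit.position, unit.move_range))
        return injury * safest_tile_value * 10.0  # 10.0 is an arbitrary tuning weight

    move_to_safety = Action(
        "move to safety",
        requirements=[lambda u: u.can_move],
        modifiers=[safety_modifier],
    )

With something like this, a badly wounded creature ends up preferring "move to safety" over attacking simply because the numbers say so, which is exactly the kind of emergent behaviour I'm after.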
