Friday, 16 November 2007

Mind AI

Artificial intelligence has been on my mind recently (I'm creating a Flash-based RPG in my spare time) and I have been reading an excellent blog (Game AI For Developers) about aspects of it that I have not come across before. Previously, my AI programming has been limited to finite state machines, pathing problems and line-of-sight issues. My post on recursive thinking the other day triggered an interesting discussion about how many developers struggle with the concept of recursion and how it can be represented simply using a ladder diagram (thanks to Zeroth for his example in the comments there, which is useful for anybody else like me who wasn't familiar with the term). Building on this, I have an idea for representing a "mind" in code, using recursion and another technique which I will discuss below. I find it plausible enough to work and I think the idea is worth sharing, in case I don't get the time to implement it in any of my own projects.

The basic idea is to simulate how babies see the world and learn from it. When each of us is born, we have a relatively clean slate* in terms of what we know. How, then, does a baby's mind gather and represent information as concepts so that it can understand the world it is in? Let's take the example of a baby's experience with a teddy bear. The baby likes the teddy bear because it feels soft to the touch. Softness, we could say, provides pleasure through the baby's sensory interaction with it. What we have in this example is a string of information, starting at the teddy bear and eventually mapping to the feeling of pleasure, something like this...

Teddy Bear -> Soft -> Pleasure

So the conceptual mappings eventually lead to a pleasurable experience of touching a teddy bear because of its softness. This then becomes a memory that informs the baby's decision making in the future. The baby will have learned that the teddy bear is a nice thing to touch and will touch it again to experience a form of mild pleasure or comfort. In technical terms, pleasure becomes a base node that many of the concepts the baby experiences will map to. I suggest using fear as another base node: negative pleasure could be stated as pain, and the survival instinct will therefore map it to an emotion of fear.
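To make the idea concrete, here is a minimal sketch of concepts mapping down to base emotion nodes. Everything here is my own illustrative choice (the class name, the `pleasure` and `fear` base nodes as plain objects); it is a sketch of the design, not a definitive implementation:

```python
class ConceptNode:
    """A concept that maps to other concepts, bottoming out at base emotions."""

    def __init__(self, name, is_base=False):
        self.name = name
        self.is_base = is_base   # base nodes: pleasure, fear
        self.links = []          # concepts this one maps to

    def map_to(self, other):
        self.links.append(other)

# Base emotional nodes that many concepts eventually map to.
pleasure = ConceptNode("pleasure", is_base=True)
fear = ConceptNode("fear", is_base=True)

# Teddy Bear -> Soft -> Pleasure
soft = ConceptNode("soft")
soft.map_to(pleasure)
teddy_bear = ConceptNode("teddy bear")
teddy_bear.map_to(soft)
```

Because "soft" is a node of its own, any number of other concepts (a blanket, a kitten) can map to the same node and inherit its link to pleasure.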

Mapping information in this way provides semantics, i.e. actual meaning attached to base data, and even (to some extent) a limited context for understanding a single piece of information. By exploring the mappings (through a recursive function, of course, as you may have to go through an unknown number of levels), the information can be put back together. It could even be how the storage mechanism should work too - when the baby encounters something new, a fuzzy logic routine probably runs to categorise what this new thing is like, in order to have some understanding of it.
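The recursive exploration described above could look something like this. The dict-of-lists representation and the example "hot stove" chain are assumptions of mine for brevity; the point is only that the function follows links down an unknown number of levels until it hits a base emotion:

```python
# Concept graph: each concept maps to the concepts it evokes.
mappings = {
    "teddy bear": ["soft"],
    "soft": ["pleasure"],
    "hot stove": ["pain"],
    "pain": ["fear"],
}
base_nodes = {"pleasure", "fear"}

def trace(concept, path=None):
    """Recursively collect every chain from a concept down to a base emotion."""
    path = (path or []) + [concept]
    if concept in base_nodes:
        return [path]  # reached a base emotion - the chain is complete
    chains = []
    for linked in mappings.get(concept, []):
        chains.extend(trace(linked, path))
    return chains

# trace("teddy bear") -> [["teddy bear", "soft", "pleasure"]]
```

Since a concept may map to several others, the function returns a list of chains rather than a single one, which fits the associative fan-out described below.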

A system like this is highly efficient in terms of storage (the concept of softness can be shared by any number of other concepts) and probably closely resembles the game of association that our own brains play. You can become aware of this when a certain sound, smell or place suddenly triggers a memory. This design idea would also map onto an object-oriented system quite easily.

*Relatively because we all have our preprogrammed DNA that affects how we interpret our sensory information.

1 comment:

JuanPi said...

Again, old difficult problem, no solutions yet.
You will meet the Symbol-grounding problem in no time.
Check Embodied AI.