I have been doing some thinking in this area as I am working on making shapes visible to bots. Today they are not visible: they block what is behind them, but instead of seeing a shape and being able to know its distance, that it is a shape, etc., it appears to the bot that there is simply nothing in that direction, even if there is a bot on the other side of the shape that would be within viewing range were the shape not there. So, today, bots can hide behind shapes, but they have no way of knowing that they are doing so, or of evolving behaviour to use or interact with shapes.
Today, the only things bots can see are other bots. They can't see shots, field borders, ties, teleporters or shapes. They can assume that if they see something, anything, it's a bot. This will and should change. My vision (pun intended) is that the world gets more complex over time, and the variety of objects in a sim, and thus the variety of things bots can see and hence interact with, increases. I want to put aside for a moment questions about how many eyes there should be, whether refvars should work for eyes other than eye5 and so on, and focus (pun intended) on the question of how bots should differentiate between the different kinds of objects that come into view, because this is likely to be the next area of vision I work on.
In DB, for reasons of practicality, we take shortcuts over the route nature took. We could implement photons, for example, make vision based upon reflected light, and require that bots evolve recognition logic to sense and distinguish the different photon reflection patterns that constitute a bot vs. a shape (not to mention a moving bot vs. a fixed bot vs. a far away bot, etc.). Needless to say, we would be at it a very long time.
So instead, we use eye distance numbers and refvars to bootstrap much of the underlying machinery nature had to evolve, so that our bots can focus (hee hee) on evolving behaviour that utilizes these built-in mechanisms rather than on evolving the mechanisms themselves.
So, I intend to simply add on to the existing refvar paradigm for object type recognition by adding a .reftype sysvar. A reftype of, say, 0 would indicate that the closest thing visible in eye5 is a bot, any type of bot. Or we could go finer-grained than that and reserve different type values for different types of bots, e.g. autotroph, non-autotroph, corpse. But my main point is that there would be other values indicating that what the bot is looking at is something other than a bot. The first one I would implement would be for looking at a shape. We could extend this to include a type for the field border if we wanted bots to see the edges of the world, as well as types for any new kinds of objects we add in the future.
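To make the idea concrete, here is a sketch of what a gene using the proposed sysvar might look like in DNA. Note that .reftype does not exist yet, and the value assignment (0 meaning "bot") is just an assumption for illustration:

```
' Hypothetical gene: fire a feeding shot only when the nearest
' object in eye5 is a bot (assumed .reftype value 0)
cond
 *.eye5 0 >
 *.reftype 0 =
start
 -1 .shoot store
stop
```

Without the .reftype condition, this gene would fire at anything visible; with it, the same gene can ignore shapes, borders or whatever other object types we add later.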
It does mean that existing bots will need to add logic to avoid trying to chase/attack/tie to/run from/swarm with shapes, but I don't see a way around that without inventing a new kind of sense. For example, we could add brand new sysvars for echolocation or infrared vision or something, as a new sense with its own set of refvar-like mechanisms, and not enhance eyes, but that has many new issues, not the least of which is that it is a lot of work and the sysvar space is limited. Besides, going the refvar route will only confuse old bots in sims where there are shapes or other, future, non-bot things that can be seen, which I think is acceptable.
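Updating an existing bot would usually mean adding one extra condition to each vision-triggered gene. Again assuming the hypothetical .reftype, with shapes reporting some non-zero value (here 1 for illustration), a typical chase gene might become:

```
' Existing chase gene, with one added condition so the bot
' no longer pursues shapes (assumed to report .reftype 1)
cond
 *.eye5 0 >
 *.reftype 1 !=
start
 20 .up store
stop
```

Old bots without the guard condition would chase shapes pointlessly, which is the acceptable confusion mentioned above.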
Comments?