Eyes
EricL:
--- Quote from: Numsgil ---Right now, for a bot, everything it sees is a bot. If we add things that the bot can see, to the bot these would be new kinds of bots.
--- End quote ---
This is mostly a historical artifact of coding ease. Old-style walls were implemented as bots, taking up slots in the rob array and so on, because it was easy to reuse the code for seeing them. While this worked, it was a total hack and hellishly slow. Bots are heavyweight objects. Shapes are implemented in their own array with their own properties. So are teleporters. So are shots. If we are not implementing viewable objects as bots, I see little reason why we should try to force-fit the "everything is a bot" model. At some point in the future, I hope the number of non-bot objects a bot sees during its lifetime vastly outnumbers the number of bots it sees.
--- Quote from: Numsgil ---So instead of telling bots that this is a wall, this is a teleporter, etc. we should tell bots that this has so much energy, this has so much body, etc.
--- End quote ---
This is essentially what I see the refvar method evolving into. Refvars become the generic properties of any viewable object, whether it is a bot or something else. They include position, velocity, etc. Most of what we need is already there. The existing refvars can be interpreted much more generically than they are today without breaking anything. Shapes have positions. Shapes have velocity. If we want to add more properties (e.g. color, reflectivity, rotational motion, texture) we can do so in a generic fashion, so that each such property can be a property of any viewable object. I'd even be happy to do away with some of the existing refvars that are totally bot-specific and rely on memval and memloc to interrogate the more bot-specific properties of a bot when the object being looked at is a bot.
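To make that concrete, here is a rough DNA sketch. The refvar and memloc/memval names are the ones we have today, but treating refxpos/refypos as applying to non-bot objects is the proposal here, not current behaviour, and interrogating the viewed bot's .shoot location is just an illustration:
--- Code ---
' Something is in view: steer toward it using only generic refvars,
' which under this proposal would describe any viewable object.
cond
 *.eye5 0 >
start
 *.refxpos *.refypos angle .setaim store
 .shoot .memloc store      ' ask to read the viewed object's .shoot location, in case it is a bot
stop

' memval then mirrors whatever the viewed bot holds at that location.
cond
 *.memval 0 !=
start
 *.refxpos *.refypos angle 628 add .setaim store   ' it shoots, so turn away
stop
--- End code ---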
The reftype sysvar just saves bots from having to code a lot of recognition logic into their genes, recognition logic that would have to rely upon generic physical observations. Suppose we make shots visible. Both shots and bots move. Which do I chase, which do I flee from? How do I determine the shot type? Coding for that purely from generic properties would be difficult and complex. Having a reftype sysvar makes the gene logic for this trivial. Personally, I'd like to keep the focus on evolving behaviour and less on evolving senses.
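For instance, with a reftype sysvar the recognition logic collapses to something like the following. The reftype values below are placeholders I made up for illustration, not a proposed encoding:
--- Code ---
' Assume, purely for illustration, reftype returns 0 for a bot, 1 for a shape, 2 for a shot.
cond
 *.eye5 0 >
 *.reftype 2 =        ' what I see is a shot
start
 *.refxpos *.refypos angle 628 add .setaim store   ' turn away from it
 100 .up store
stop

cond
 *.eye5 0 >
 *.reftype 0 =        ' what I see is another bot
start
 *.refxpos *.refypos angle .setaim store           ' chase it
 -1 .shoot store
stop
--- End code ---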
--- Quote from: Numsgil ---So I propose giving bots two sorts of senses. One detects translational motion (the refvels) the other detects "spinning" motion (a bot spinning looking for food, or an outbound teleportation vortex, etc.).
--- End quote ---
We already have the refvels. I'm not opposed to adding additional motion-descriptor refvars for describing the complex motion of viewable objects if people think it is critical, but motion is only one property of objects and hardly all there is to seeing. Color, for example, could be important as one way a bot remembers a particular shape or another bot. It might evolve to know that its home is the yellow shape and to turn around if it can no longer see the yellow shape. I'm all for basing higher-level mechanisms on more generic underlying physics, but I don't think we want to reinvent vision at this stage.
Jez:
--- Quote ---I'd like to keep the focus on evolving behaviour and less on evolving senses.
--- End quote ---
Aren't they part and parcel of the same thing? Most of the senses that have been added to the bots in the past have, IMO, added to the behaviours you could evolve.
I sort of like the direction you guys are going, but I have to comment that, to me, everything can be identified using the senses and a series of yes/no questions. Refvars, if you like: maybe I can't tell what type the shot is until it hits me, but I can tell that it is very small, travelling very fast and heading this way. If some other bot tells me first that it is food then I might stick around and use my touch sense to check, but otherwise I'd leg it.
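Something like this rough sketch, using only generic measurements. I'm using refbody as a stand-in for "very small" and a negative refvelup as "heading this way"; the thresholds and sign conventions are guesses rather than gospel:
--- Code ---
' Something small is closing on me fast: don't wait to find out what it is, leg it.
cond
 *.eye5 0 >
 *.refbody 10 <          ' it is very small
 *.refvelup -30 <        ' and closing on me quickly (assumed sign convention)
start
 *.refxpos *.refypos angle 628 add .setaim store
 100 .up store
stop
--- End code ---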
I started this post originally because I found I couldn't do proper herd behaviour, only follow or clump behaviour. The limit I saw on eyes was that they can only see one thing at any one time. However many extra refvars you add, unless you address that problem the behavioural changes they enable will remain limited.
EricL:
--- Quote from: Jez ---They are not part and parcel of the same thing? Most of the senses that have been added to the bots in the past have, IMO, added to the behaviours you could evolve.
--- End quote ---
I think you are misunderstanding me. I am happy to entertain suggestions that we add eyes all the way around the bot, and that we add refvars for every eye so that bots can have 360-degree peripheral vision all in a single cycle. I'd be happy to add sysvars and code to support other high-level, built-in senses ready and waiting for bot DNA to use, and to build into the starting position of every bot the ability to use whatever level of senses we decide we want.
My point is that I want the logic that bots evolve in their DNA to be free to focus as much as possible on complex behaviour that utilizes input from senses, and not to spend simulation cycles evolving the senses themselves in the DNA. Where we want additional high-level senses, we absolutely SHOULD bootstrap the bots and build those senses directly into the simulator code in a high-level way, not require that they evolve from lower-level senses in bot DNA. The latter would take too long.
This issue is a fundamental question, perhaps 'the' fundamental question, that anyone writing an Alife simulator has to ask. At what "level" do we want evolution to operate? What things do we build in from cycle 0, so that every organism just has them without any evolved genes, and what things require evolved logic? I prefer to focus on evolving complex behaviour, not senses.
Zinc Avenger:
Bots should have access to as much information as possible - but I don't believe that they should have any real form of processing done for them; that should all be coded or evolved.
I'd suggest having a separate set of eyes for sensing different objects - keep the eye system the way it is and use those eyes for seeing bots. Add some more eyes for seeing shapes, and more eyes for seeing shots. More eyes for seeing teleporters. And so on. That might over-complicate things a little though.
How about making eyes see everything, but only returning the closest object it can see whether it is shot, bot, shape or whatever? And add a second set of eyes which returns a broad category of what that eye is looking at - bot, shot etc. It might be interesting to see the effect it has on continually-shooting bots - they'd effectively blind themselves whenever they shoot, so perhaps exclude a bot's own shots.
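Something like this hypothetical sketch - the .eyetype5 sysvar and its category values are entirely made up here to illustrate the idea:
--- Code ---
' eye5 returns the distance to the closest object of any kind;
' a hypothetical eyetype5 returns a broad category, say 0 = bot, 2 = shot.
cond
 *.eye5 30 >
 *.eyetype5 0 =      ' the closest thing dead ahead is a bot, not a shot
start
 -1 .shoot store
stop
--- End code ---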
EricL:
--- Quote from: Zinc Avenger ---Bots should have access to as much information as possible - but I don't believe that they should have any real form of processing done for them, that should all be coded or evolved.
--- End quote ---
On this philosophical approach issue, we are in complete agreement.
--- Quote from: Zinc Avenger ---I'd suggest having a separate set of eyes for sensing different objects - keep the eye system the way it is and use those eyes for seeing bots. Add some more eyes for seeing shapes, and more eyes for seeing shots. More eyes for seeing teleporters. And so on. That might over-complicate things a little though.
--- End quote ---
I strongly disfavor this approach for several reasons, including the proliferation of sysvars, the scaling issues with adding more eyes for better visual resolution (see below) and the problems with adding new types of objects to the system in the future - rocks, hills, lakes, etc. While I agree the simulator should handle object class recognition and identification for the bot - i.e. some sysvar value should indicate to the bot that what it sees is another bot, a shot, a shape, a rock, a pond, etc. - the bot should not have to use different senses to view different types of objects.
--- Quote from: Zinc Avenger ---How about making eyes see everything, but only returning the closest object it can see whether it is shot, bot, shape or whatever? And add a second set of eyes which returns a broad category of what that eye is looking at - bot, shot etc. It might be interesting to see the effect it has on continually-shooting bots - they'd effectively blind themselves whenever they shoot, so perhaps exclude a bot's own shots.
--- End quote ---
The idea of separating peripheral vision and focused vision is a good one and follows nicely from the model we have in place today regarding the special abilities of .eye5. From both a coding perspective and an elegance perspective, I like the notion of a bot being able to see a little about a lot and a lot about a little, and having to 'focus' on what it wants to see a lot about.
Given the sysvar model we have, I cannot immediately see a programmatic way to allow individual eyes to see more than a single object each (the closest object) per cycle. However, if we want better visual resolution and the ability to see past nearer objects (such as shots the bot just fired) to detect farther-away objects, I might suggest adding additional eyes and/or narrowing the field of view of each eye. More eyes with narrower fields of view equals better resolution, and with sufficient resolution, nearby small objects such as shots will not block larger, farther-away objects.
Furthermore, I suggest we make the field of view of each eye, and even the direction it is looking, subject to evolution by adding additional sysvars for each eye which control the width of its field of vision and its angle relative to .aim. This would allow bots to evolve different visual strategies: bots with great peripheral vision that use wide fields of view all around but have limited resolution for seeing past nearby objects; myopic bots with lousy peripheral vision but narrowly focused eyes for better resolution in a particular direction; a combination of each; or even the ability to switch their visual capabilities from one mode to another as circumstances arise.
As I discuss in prior posts above, I would retain the .eye5 refvar behaviour as the mechanism for getting details on a viewed object, in particular viewed-object identification. If you just want to know what the closest object is, make the field of view of .eye5 360 degrees and the refvars will represent the properties of the nearest object. You will know what it is and how close, but not the direction. If you want to focus on a particular viewed object, just narrow the field of view of .eye5 as much as you want and point it at the thing you want to view, and the refvars will reflect the properties of that object even if it is farther away than others.
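From the DNA side it might look something like the sketch below. The .eye5dir and .eye5width names, their units and the specific numbers are placeholders for whatever we would actually implement:
--- Code ---
' Proximity-sensor mode: at birth, open eye5 to a full circle.
' The refvars then describe whatever is closest in any direction.
cond
 *.robage 0 =
start
 1256 .eye5width store       ' 1256 = full circle in the existing aim units (illustrative)
stop

' Focused mode: narrow eye5 and swing it off to the right of .aim to track something.
cond
 *.eye5 0 >
start
 35 .eye5width store
 280 .eye5dir store          ' roughly 80 degrees right of .aim (illustrative)
stop
--- End code ---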