Darwinbots Forum

Code center => Suggestions => Topic started by: Jez on August 22, 2006, 03:31:20 PM

Title: Eyes
Post by: Jez on August 22, 2006, 03:31:20 PM
Do you think that bots would benefit from having their eyes changed a bit? The following quotes come from a previous discussion here: (http://www.darwinbots.com/Forum/index.php?showtopic=1466&st=0&gopid=1367940&)
Quote
What I can't do is mimic the behaviour of simple things like fish, because they have two eyes and bots don't. I can't mimic shoal behaviour, for instance, only mass-following behaviour.
Quote
How about giving bots greater control over eyes, so that, for example, each refvar gets split into 1-9
So *.refeye9 would read back the eye reference of a bot in your eye9
Quote
What I was thinking when I wrote that was more sort of two eyes and being able to choose the location that they are at. The difference between having eyes at the side of the head or on the front of the head.
Quote
Maybe something similar to what I'm doing with ties. Have a command (switcheye or something) that changes which eye has the "focus", and as such changes the info read from the refvars during DNA execution. You could change your "focus" from eye5 to eye3 and back again in a single cycle.
The only possible issue is that while this increases potential complexity (which is good) it decreases the reaction time that bots have. Already bots can accomplish a lot in a single cycle. The few cycles it takes to turn and "lock" onto an opponent are very important to the fitness landscape, and we should be careful before we change it.
Quote
Where the eyes overlap the animal has depth perception; where it can see with only one eye it has no depth perception but can ID objects.
That's what the bots are doing wrong, isn't it! They have depth perception through the side eyes but can't ID.
Quote
If we remove the distance part of eye5 and require bots to triangulate the distance to other bots, these are the consequences as I see them.
1. We would need to add some trig functions. Perhaps map the usual 0-360 degrees and -1 to 1 of regular sin and cosine to 0-1256 and -1000 to 1000.
2. Things get more complex for the bots. Specifically finding the distance to the target becomes rather difficult. Finding the velocity even more so.
3. It may not be possible to tell the difference between a really big bot and a very close bot unless you form a multibot and use stereoscopic vision.
4. A lot of old bots simply wouldn't work.
I would be willing to make this fundamental change as part of the feature set for 3.0, since upgrading the numbers gives us a legitimate reason not to support older bots. But it would be a huge change.
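(Spelled out, the mapping proposed there would be something like: sindb(θ) = round(1000 · sin(2π·θ/1256)) for θ in 0 to 1256, and likewise for cosine, so a full circle is 1256 units and the outputs span -1000 to 1000.)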
Title: Eyes
Post by: Numsgil on September 07, 2006, 07:20:38 AM
If we do this we need to do it very carefully.  I've already put a lot of input into this topic, so I think others should start speaking up as well.

Exactly what would you change about the way that eyes work that would make them more realistic or whatever your measure of "better" is?
Title: Eyes
Post by: Ramiro on September 14, 2006, 10:39:04 PM
I think that eyes all around the bot could be interesting. Sometimes it could be useful and sometimes not, but it would be quite challenging to make a bot take the right course of action when it sees many enemies around.

Binocular perception could be used to assess distance, eliminating the variables that return the EXACT distance, something that hardly occurs in nature.
Title: Eyes
Post by: EricL on October 20, 2006, 05:01:36 PM
I have been doing some thinking in this area as I am working on making shapes visible to bots.  Today they are not visible.  They block what is behind them, but instead of seeing a shape and being able to know its distance, that it is a shape, etc., it appears to the bot that there is simply nothing in that direction, even if there is a bot on the other side of the shape that would be within viewing range were the shape not there.  So, today, bots can hide behind shapes, but they have no way of knowing that they are doing so or of evolving behaviour to use or interact with shapes.

Today, the only things bots can see are other bots.  They can't see shots, field borders, ties, teleporters or shapes.  They can assume that if they see something, anything, it's a bot.   This will and should change.  My vision (pun intended) is that the world gets more complex over time, and the variety of objects in a sim, and thus the variety of things bots can see and hence interact with, increases.  I want to put aside for a moment questions about how many eyes there should be, whether refvars should work for eyes other than eye5 and so on, and focus (pun intended) on the question of how bots should differentiate between the different kinds of objects that come into view, because this is likely to be the next area of vision I work on.

In DB, for reasons of practicality, we take shortcuts over the route nature took.  We could implement photons, for example, and make vision based upon reflected light, requiring that bots evolve recognition logic to sense and distinguish the different photon reflection patterns that constitute a bot vs. a shape (not to mention a moving bot vs. a fixed bot vs. a far-away bot, etc.).  Needless to say, we would be at it a very long time.

So instead, we use eye distance numbers and refvars to bootstrap much of the underlying machinery nature had to evolve, so that our bots can focus (hee hee) on evolving behaviour that utilizes these built-in mechanisms rather than on evolving the mechanisms themselves.

So, I intend to simply add on to the existing refvar paradigm for object type recognition by adding a .reftype sysvar.  A reftype of, say, 0 would indicate that the closest thing visible in eye5 is a bot, any type of bot.  Or we could go finer-grained than that and reserve different type values for different types of bots, e.g. autotroph, non-autotroph, corpse.  But my main point is that there would be other values indicating that what the bot was looking at was something other than a bot.  The first one I would implement would be for looking at a shape.  We could extend this to include a type for the field border, if we wanted bots to see the edges of the world, as well as any new kinds of objects we add in the future.

It does mean that existing bots will need to add logic to avoid trying to chase/attack/tie to/run from/swarm with shapes, but I don't see a way around that without inventing a new kind of sense.  For example, we could add brand new sysvars for echolocation or infrared vision or something as a new sense with its own set of refvar-like mechanisms and not enhance eyes, but that has many new issues, not the least of which is that it is a lot of work and the sysvar space is limited.  Besides, going the refvar route will only confuse old bots in sims where there are shapes or other, future, non-bot things that can be seen, which I think is acceptable.

Comments?
Title: Eyes
Post by: Numsgil on October 20, 2006, 07:09:41 PM
Why don't we delve a bit into bot psychology for this.  Right now, for a bot, everything it sees is a bot.  If we add things that the bot can see, to the bot these would be new kinds of bots.

So instead of telling bots that this is a wall, this is a teleporter, etc. we should tell bots that this has so much energy, this has so much body, etc.

Motion is important for real animals when they decide if something is alive (another animal) or not.  There are two types of motion: translational and deformational.  A car moving down a highway has very little deformational motion but a lot of translational motion.  It seems to just glide.

Something like a man on a treadmill has a lot of deformational motion but little translational.  The man wiggles and squirms but doesn't go anywhere.

So I propose giving bots two sorts of senses.  One detects translational motion (the refvels) the other detects "spinning" motion (a bot spinning looking for food, or an outbound teleportation vortex, etc.).

Using these senses, a smart bot can distinguish between all the sorts of "bots" in its environment.

Other living bots - They move (either translationally if hunting or spinning if idle) and have > 0 nrg.

Dead bots - They don't spin and have body > 0 and nrg = 0

Algae - Default algae has refeye = 0, dunno about a good idea for any old plant.

Walls - No motion whatsoever, body = 0, nrg = 0.

Teleporters - body = 0, nrg = 0, lots of spinning motion.

This way to the bots everything in the world is a bot, some just have different properties.
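A couple of sketch genes against that table, purely for illustration (refnrg and refbody are refvars current bots already use; the zero readings for walls and teleporters are assumptions taken from the list above):

cond
 *.eye5 0 >
 *.refnrg 0 =
 *.refbody 0 >             ' no nrg but body left: a corpse, so eat it
start
 -1 .shoot store
stop
cond
 *.eye5 0 >
 *.refnrg 0 =
 *.refbody 0 =             ' no nrg and no body: a wall or teleporter, so turn away
start
 *.aim 628 add .aim store  ' 628 = half of 1256, a 180-degree turn
stop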
Title: Eyes
Post by: EricL on October 20, 2006, 08:09:32 PM
Quote from: Numsgil
Right now, for a bot, everything it sees is a bot. If we add things that the bot can see, to the bot these would be new kinds of bots.
This is mostly a historical artifact of coding ease.  Old-style walls were implemented as bots, taking up slots in the rob array and so on, because it was easy to reuse the code for seeing them, etc.  While this worked, it was a total hack and hell slow.  Bots are heavyweight objects.  Shapes are implemented in their own array with their own properties.  So are teleporters.  So are shots.  If we are not implementing viewable objects as bots, I see little reason why we should try to force-fit the "everything is a bot" model.   At some point in the future, I hope the number of non-bot objects a bot sees during its lifetime vastly outnumbers the number of bots it sees.

Quote from: Numsgil
So instead of telling bots that this is a wall, this is a teleporter, etc. we should tell bots that this has so much energy, this has so much body, etc.

This is essentially what I see the refvar method evolving into.  Refvars become the generic properties of any viewable object, whether it is a bot or something else.  They include position, velocity, etc.  Most of what we need is already there.  The existing refvars can be interpreted much more generically than they are today without breaking anything.  Shapes have positions.  Shapes have velocity.  If we want to add more properties (e.g. color, reflectivity, rotational motion, texture) we can do so in a generic fashion, so that each property can be a property of any viewable object.  I'd even be happy to do away with some of the existing refvars that are totally bot-specific and rely on memval and memloc to interrogate the more bot-specific properties of a bot when the object being looked at is a bot.

The reftype sysvar just saves bots from having to code a lot of recognition logic into their genes, recognition logic that would have to rely upon generic physical observations.  Suppose we make shots visible.  Both shots and bots move.  Which do I chase, which do I flee from?  How do I determine the shot type?  Coding for that just by generic properties would be difficult and complex.  Having a reftype sysvar makes the gene logic for this trivial.  Personally, I'd like to keep the focus on evolving behaviour and less on evolving senses.
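As a sketch of how cheap that gene logic could be, assuming a hypothetical type value of 1 for shapes:

cond
 *.eye5 0 >
 *.reftype 1 =             ' hypothetical: 1 = shape
start
 *.aim 628 add .aim store  ' it's a wall, not food: turn around instead of chasing it
stop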

Quote from: Numsgil
So I propose giving bots two sorts of senses.  One detects translational motion (the refvels) the other detects "spinning" motion (a bot spinning looking for food, or an outbound teleportation vortex, etc.).

We already have the refvels.  I'm not opposed to adding additional motion-descriptor refvars for describing the complex motion of viewable objects if people think it is critical, but motion is only one property of objects and hardly all there is to seeing.  Color, for example, could be important as one way a bot remembers a particular shape or another bot.  It might evolve to know that its home is the yellow shape, and to turn around if it can no longer see the yellow shape.   I'm all for basing higher-level mechanisms on more generic underlying physics, but I don't think we want to reinvent vision at this stage.
Title: Eyes
Post by: Jez on November 09, 2006, 06:20:46 AM
Quote
I'd like to keep the focus on evolving behaviour and less on evolving senses.

Aren't they part and parcel of the same thing? Most of the senses that have been added to the bots in the past have, IMO, added to the behaviours you could evolve.

I sort of like the direction you guys are going, but I have to comment that, to me, everything can be identified using the senses and a series of yes/no questions. Refvars, if you like. Maybe I can't tell what type the shot is until it hits me, but I can tell that it is very small, traveling very fast and heading this way. If some other bot tells me first that it is food then I might stick around and use my touch sense to check this, but otherwise I'd leg it.

I started this post originally because I found I couldn't do proper herd behaviour, only follow or clump behaviour. The limit I saw on eyes was that they can only see one thing at any one time. However many extra refvars you add, unless you address that problem the behavioural changes they cause will remain limited.
Title: Eyes
Post by: EricL on November 09, 2006, 11:30:30 AM
Quote from: Jez
They are not part and parcel of the same thing? Most of the senses that have been added to the bots in the past have, IMO, added to the behaviours you could evolve.
I think you are misunderstanding me.  I am happy to entertain suggestions that we add eyes all the way around the bot, and that we add refvars for every eye so that bots can have 360-degree peripheral vision all in a single cycle.  I'd be happy to add sysvars and code to support other high-level, built-in senses ready and waiting for bot DNA to use, to build into the starting position of every bot the ability to use whatever level of senses we decide we want.

My point is that I want the logic that bots evolve in their DNA to be free to focus as much as possible on complex behaviour that utilizes input from senses, and not spend simulation cycles on evolving the senses themselves in the DNA.  Where we want additional high-level senses, we absolutely SHOULD bootstrap the bots and build them directly into the simulator code in a high-level way, not require that they evolve from lower-level senses in bot DNA.  The latter would take too long.

This issue is a fundamental question, perhaps 'the' fundamental question, that anyone building an Alife simulator has to ask.  At what "level" do we want evolution to operate?  What things do we build in from cycle 0, so that every organism just has them without any evolved genes, and what things require evolved logic?  I prefer to focus on evolving complex behaviour, not senses.
Title: Eyes
Post by: Zinc Avenger on November 10, 2006, 08:01:30 AM
Bots should have access to as much information as possible - but I don't believe that they should have any real form of processing done for them; that should all be coded or evolved.

I'd suggest having a separate set of eyes for sensing different objects - keep the eye system the way it is and use those eyes for seeing bots. Add some more eyes for seeing shapes, and more eyes for seeing shots. More eyes for seeing teleporters. And so on. That might over-complicate things a little though.

How about making eyes see everything, but only returning the closest object it can see whether it is shot, bot, shape or whatever? And add a second set of eyes which returns a broad category of what that eye is looking at - bot, shot etc. It might be interesting to see the effect it has on continually-shooting bots - they'd effectively blind themselves whenever they shoot, so perhaps exclude a bot's own shots.
Title: Eyes
Post by: EricL on November 10, 2006, 02:31:36 PM
Quote from: Zinc Avenger
Bots should have access to as much information as possible - but I don't believe that they should have any real form of processing done for them, that should all be coded or evolved.

On this philosophical approach issue, we are in complete agreement.

Quote from: Zinc Avenger
I'd suggest having a separate set of eyes for sensing different objects - keep the eye system the way it is and use those eyes for seeing bots. Add some more eyes for seeing shapes, and more eyes for seeing shots. More eyes for seeing teleporters. And so on. That might over-complicate things a little though.

I strongly disfavor this approach for several reasons, including the proliferation of sysvars, the scaling issues with adding more eyes for better visual resolution (see below) and the problems with adding new types of objects to the system in the future - rocks, hills, lakes, etc.  While I agree the simulator should handle object class recognition and identification for the bot - i.e. some sysvar value should indicate to the bot that what it sees is another bot, a shot, a shape, a rock, a pond, etc. - the bot should not have to use different senses to view different types of objects.

Quote from: Zinc Avenger
How about making eyes see everything, but only returning the closest object it can see whether it is shot, bot, shape or whatever? And add a second set of eyes which returns a broad category of what that eye is looking at - bot, shot etc. It might be interesting to see the effect it has on continually-shooting bots - they'd effectively blind themselves whenever they shoot, so perhaps exclude a bot's own shots.

The idea of separating peripheral vision and focused vision is a good one and follows nicely from the model we have in place today regarding the special abilities of .eye5.  From both a coding perspective and an elegance perspective, I like the notion of a bot being able to see a little about a lot and a lot about a little, and having to 'focus' on what it wants to see a lot about.

Given the sysvar model we have, I cannot immediately see a programmatic way to allow individual eyes to see more than a single object each (the closest object) per cycle.  However, if we want better visual resolution and the ability to see past nearer objects (such as shots the bot just fired) to detect farther-away objects, I might suggest adding additional eyes and/or narrowing the field of view of each eye.  More eyes with narrower fields of view equals better resolution, and with sufficient resolution, nearby small objects such as shots will not block larger, far-away objects.

Furthermore, I suggest we make the field of view of each eye, and even the direction it is looking, subject to evolution, by adding additional sysvars for each eye which control the width of its field of vision and its angle relative to .aim.  This would allow bots to evolve different visual strategies: bots with great peripheral vision that use eyes with wide fields of view all around but which possess limited resolution to see past nearby objects, or myopic bots with lousy peripheral vision but narrowly focused eyes for better resolution in a particular direction, or a combination of each, or even the ability to change their visual capabilities from one mode to another as circumstances arise.

As I discussed in prior posts above, I would retain the .eye5 refvar behaviour as the mechanism for getting details on a viewed object, in particular viewed object identification.  If you just want to know what the closest object is, make the field of view for .eye5 360 degrees and the refvars will represent the properties of the nearest object.  You will know what it is and how close, but not the direction.  If you want to focus on a particular viewed object, just narrow the field of view of .eye5 as much as you want and point it at the thing you want to view, and the refvars will reflect the properties of that object even if it is farther away than others.
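For illustration, a sketch with hypothetical per-eye sysvars along the lines of .eye5dir and .eye5width (widths assumed to use the same 1256-units-per-circle scale as .aim):

cond
 *.eye5 0 =
start
 1256 .eye5width store    ' nothing in view: open eye5 all the way around as a proximity sense
stop
cond
 *.eye5 0 >
start
 0 .eye5width store       ' found something: snap back to the default narrow field (assumed semantics)
stop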
Title: Eyes
Post by: Numsgil on November 10, 2006, 10:37:39 PM
I would have all eyes a bot has cover the same angle.  I can see problems otherwise.

Maybe something like this: each bot gets a field of view, centered around its aim.  This field of view is subdivided into n eyes.  Using a command similar to the one I'm developing for ties, a bot can change which eye has the focus during a cycle, which updates the different sysvars.  Maybe you can switch focus only between eyes you designate as focusable eyes.  Eyes without focus can only tell you the distance to the nearest object.

The more eyes a bot has, the more nrg the bot has to pay in upkeep costs.  Focusable eyes cost more to maintain than the peripheral kind.

Turning eyes on, adding eyes, and increasing the field of view costs a significant amount of nrg, which should prevent bots from overly abusing their eyes, and encouraging specialization (narrow lookers for dog fighters, broad vision for grazers).

We can reserve sysvars 500 through 549 for eyes.
Title: Eyes
Post by: EricL on November 11, 2006, 01:34:32 AM
Quote from: Numsgil
I would have all eyes a bot has cover the same angle.  I can see problems otherwise.

Such as?

Quote from: Numsgil
Maybe something like this: each bot gets a field of view, centered around its aim.  This field of view is subdivided into n eyes.  Using a command similar to the one I'm developing for ties, a bot can change which eye has the focus during a cycle, which updates the different sysvars.  Maybe you can switch focus only between eyes you designate as focusable eyes.  Eyes without focus can only tell you the distance to the nearest object.
I'm not sure I know what you mean by "during a cycle".  I think you mean that the bot can change which eye has the focus and update the refvars based on the new focus eye all in the same cycle.  The same can easily be achieved in the model I outline.  The bot could change the field of view and direction of eye5 and the code would update the refvars based on the new direction/focus all in the same cycle.  In fact, this is how it works today, I believe: eye refvar values are a function of the new .aim value and are updated the same cycle.

I believe the model I outline is the more general one.  That is, with the exception of changing which eye is used to update the refvars (which could be added, but is largely unnecessary in my model because the direction of eye5 can be changed, potentially overlapping or subsetting other eyes, without the bot turning), what you outline is a special case of what I outline, in that in your model the eyes are not movable and have a fixed field of view.  Your model has the advantage (as does mine) of allowing bots to get information on the different objects they see without turning, by changing the eye used for loading refvars, but unlike mine, it does not allow bots to evolve different visual strategies w.r.t. the direction their eyes face and/or evolve strategies to narrow or widen their field of view dynamically as needed.

Quote from: Numsgil
The more eyes a bot has, the more nrg the bot has to pay in upkeep costs.  Focusable eyes cost more to maintain than the peripheral kind.

Turning eyes on, adding eyes, and increasing the field of view costs a significant amount of nrg, which should prevent bots from overly abusing their eyes, and encouraging specialization (narrow lookers for dog fighters, broad vision for grazers).
Eyes correspond to sysvars.  I do not see an easy way to allow some bots to have more eyes than others unless we add additional eye enable/disable sysvars, which personally I don't favor, for several reasons, among them that a single point mutation can cause a bot to be charged for something it does not really use.  I similarly dislike the idea of trying to determine which eyes a bot is using, and how, as part of DNA flow by trying to catch sysvar reads in order to charge more or less, as that is extremely problematic given the various ways (that you recently pointed out in another thread) that exist to address sysvar locations.

In my model, all bots have the same number of eyes, but how they use them - wide or narrow, and where they point them: ahead, to the sides, all around, behind them, wide and overlapping or narrow with gaps between them, or spinning around like a lighthouse beam - is completely under the control of selection.  If we wanted to charge bots for changing how their eyes are used (something I don't advocate but am not opposed to), we could choose to charge for sysvar operations such as changing the field of view of an eye or changing its direction, which would be a straightforward charge for storing to the corresponding sysvars.  But the issue of whether we charge for changing how bots use their eyes is IMHO somewhat orthogonal to what the right eye model should be.

Quote from: Numsgil
We can reserve sysvars 500 through 549 for eyes.

My model requires 3 sysvars for every eye, plus whatever refvars we choose to add for describing the properties of objects other than bots.  For each eye, we need one read/write sysvar for the eye's direction relative to .aim, one read/write sysvar for the width of the eye's field of view, and one read-only sysvar for the eye value itself.

Given that each eye is movable and the width of field of each eye changeable, I see no great need to add additional eyes beyond the current nine that we have.
Title: Eyes
Post by: Numsgil on November 11, 2006, 03:45:06 AM
It's late night for me, so excuse me if I start to ramble...

Quote from: EricL
I'm not sure I know what you mean by "during a cycle".  I think you mean that the bot can change which eye has the focus and update the refvars based on the new focus eye all in the same cycle.

Say a bot is focusing on eye3.  It can change the focus to eye5 and the refvars are all updated in the same cycle.  Something like ".eye5 setfocus".  All the refvars are updated to reflect the new visual target in eye5.  The bot continues executing its DNA, then decides to switch to eye6.  It does ".eye6 setfocus" and now it's looking through .eye6.  All the refvars get updated to reflect what eye6 sees.  All this happens in the same cycle, while the DNA is still executing.  Different parts of your DNA can see refvars through different eyes in the same cycle.

The idea is that we extend the system set up for using multiple ties in a single cycle to eyes as well.  For this to make any sense at all, you'll need to know what I'm talking about with the new tie paradigm.  It should be in the wiki.

In the present system, if you want to see the refvars of what's in eye3, you would need to physically turn, which takes a full cycle.
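A rough DNA sketch of the proposal (setfocus being the proposed command, not anything in the current build):

cond
 *.eye3 *.eye5 >                       ' the object in eye3 is closer than the one in eye5
start
 .eye3 setfocus                        ' proposed: refvars now describe the eye3 target, mid-cycle
 *.refxpos *.refypos angle .aim store  ' turn to face it without a blind cycle
stop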

Quote
Eyes corrospond to sysvars.  I do not see an easy way to allow some bots to have more eyes than others unless we add additional eye enable/disable sysvars

The system I outlined above takes care of this by overloading the existing refvars.  .refnrg would correspond to the nrg of whatever you're looking at with your currently active eye.  As you move your focus around, different data gets loaded into the refvars.  You wouldn't need any additional refvars for 1 eye or 1 million, other than to allow for backwards compatibility with eye1-9.

Quote
In my model, all bots have the same number of eyes, but how they use them, wide or narrow and where they point them, ahead, to the sides, all around, behind them, wide and overlapping or narrow with gaps between them, or spinning around like a lighthouse beam is completly under the control of selection.  If we wanted to charge bots for changing how their eyes are used (something I don't advocate but am not opposed to) we could choose to charge for sysvar operations such as changing the field of view of an eye or changing it's direction, which would be a a straight forward charge for storing to the corrosponding sysvars.  But the issue of whether we charge for changing how bots use their eyes is IMHO somewhat orthoginal to what the right eye model should be.

I'm thinking in terms of large creatures here.  Binocular vision is a tradeoff against a wider field of view.  We could probably have both, but it would mean additional eyes placed on the sides or even the back of our heads.

We can't move our eyes around our head to change our field of view except through evolution.  This is something that's difficult to do with the bots.  We could have them only be able to change their eyes in the first 100 cycles after birth, or have it change randomly, but that's largely missing the point.  If you let bots actively take control of their eyes, at any time in their life, there needs to be a cost associated with that.  My body can't just grow another eye during a battle for nothing.  Supposing it could grow an eye at all, of course.  That eye takes effort to grow and connect into my brain.

The costs associated with vision are going to be instrumental in how they're used.

Ultimately, whether we have 9 very customizable eyes or a greater number of more rigidly defined eyes is largely a matter of taste.  They're probably effectively the same.  Feel free to cherry-pick aspects of mine if they suit your fancy.
Title: Eyes
Post by: EricL on November 11, 2006, 04:19:05 PM
Ah.  I understand what you mean now by 'during a cycle'.

Independent of the eye model, I consider it beyond the scope of what I'm willing to do with the VB version in the short run to allow for multiple refvar updates within a single cycle.

I think we are in agreement on the way refvars should work as a means to gather a lot of information about whatever the bot is focused on, i.e. the refvars should reflect the properties of whatever the bot is focused on, be it another bot or something else.  How that focus is achieved, by changing the direction of one eye or by switching from one eye to another, is still a point of debate.

I suggest that only advanced, coordinating multibots should be capable of true binocular vision, and that we should never allow eyes anywhere on a single bot other than at its center.  Note that since eyes read back distance, we already have some aspects of parallax vision with single bots (we are lacking depth of field) even when only using a single eye.

I am not suggesting that the number of eyes a bot has is changeable or that it can "grow eyes" as needed.  What I am suggesting is that a bot can look around without turning its body.

Most biological organisms (that have eyes) can move their eyes independently of their body, including us, and there are many biological organisms capable of wide ranges of eye-socket movement, including some with eyestalks that can cover the full 360 degrees with a single eye.

Because all eyes are at the bot's center, what I propose is much more akin to eyeball movement within a socket, not physical movement of an eye around onto different places on an organism, and given that bots are virtual organisms, I see no reason for us to artificially restrict the potential degrees of eye-socket movement.

We can have costs for changing the properties of an eye if people want - the direction it points and its field of view - but I see such costs as much less critical than you do, since I equate these operations to simple eyeball movement within a socket and changes in lens focal length, basically inexpensive muscle movements for biological organisms.

And I should point out there is a natural cost tradeoff of sorts in being able to change the field of view of an eye.  The wider the field, the more I can see, but the less I know about the things I do see, i.e. I give up resolution for width of field.  This alone seems sufficient to me to provide selective pressure on eye field of view.
Title: Eyes
Post by: Numsgil on November 11, 2006, 11:06:00 PM
Sounds good.

Multiple sysvar updates in a cycle aren't as hard as you might think.  The trick is really storing all your data in the eyes and loading it into memory as your focus changes.  If changing your focus is a command, it acts much like a store or inc command, in that it's changing memory values.

I can probably rig up this part of the system if you don't want to hassle with it.

Are you still restricting eye5 to being the only eye that can receive focus?  My primary concern is providing potentially different costs between peripheral "eyes", which only give distance and no other information, and focus eyes, which give more information.  This is sort of moot if only one eye can be used to focus.

Everything else I'm pretty indifferent about.
Title: Eyes
Post by: Zinc Avenger on November 13, 2006, 10:19:51 AM
How about making which eye has focus changeable?

Strip the special status from eye5, make eye5 the default "focus" eye, the way it currently is, and designate a memory location to hold an identifier saying which of the eyes has focus. So stick a 2 in it, and eye2 has focus. Stick a 9 in it and eye9 has focus. I don't really see why this should have much in the way of prohibitive costs, unless you want to give bots the option of focusing multiple eyes simultaneously.

This also has the benefit of not breaking existing bots.
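In other words, something as cheap as (with .focuseye as the hypothetical location):

 3 .focuseye store    ' eye3 now feeds the refvars; storing 5 restores the current behaviour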
Title: Eyes
Post by: Testlund on November 13, 2006, 12:16:54 PM
The way I see it, the bots are single-celled life forms, and as such it is unrealistic for them to have any vision at all, but within DB it might be difficult to choose another way for them to sense their environment. That's why I have suggested that their vision radius should be decreased, so they almost need to touch another bot to see it. I'm thinking that real bacteria, for instance, have to be very close to sense something. I think real bacteria swim around searching for food and don't discover it until they bump into it.
Title: Eyes
Post by: EricL on November 13, 2006, 01:32:55 PM
There exist photosensitive bacteria with multiple photoreceptors which can sense differences in directional brightness.  Many single-celled species use such senses for navigation, so your claim that eyes on single cells are unrealistic is not well supported.

But that is somewhat beside the point.  I see no strong reason why DB must parallel biology at all, and it is not a naive question to ask why the simulated environment has any relationship to the physical world at all.  We could have chosen a more CoreWars / Avida approach, where bot phenotypes are software artifacts competing in a nonintuitive virtual environment more conducive to pure software entities and wholly divorced from the physical world and its physics.  Why didn't we?

The answer, of course, is that humans are the ones viewing the evolved behaviour, and choosing a virtual environment with an intuitive and consistent set of physics means it is easier for us to recognize evolved behaviour.  But this does not mean we must place artificial or biological limits on bot senses as an attempt to imitate biology.

I'm all for encouraging multibot evolution, and as such there is a long list of things I would not want to see single bots capable of, but we must be realistic w.r.t. available computing power.  If we required many bots to cooperate in order to achieve meaningful vision, it would take a long time to evolve, and as I have said many times, I'm primarily interested in evolving behaviour, not senses, so I favor enhancing single-bot vision, not degrading it.

Changing subjects somewhat: the suggestion of bots being able to change the focus eye and the suggestions of allowing bots to change eye direction and field of view are not mutually exclusive.  Likely I will implement all of them over time.
Title: Eyes
Post by: Numsgil on November 13, 2006, 03:45:51 PM
If we were to modify the way the day/night cycles work, so that the amount of ambient light in the sim is near zero at dawn/dusk, and practically zero at night, we could have vision radius decrease to reflect this.

In the middle of the night you would have to get up really close to other bots to see them at all, because there isn't enough light.

This isn't an expensive operation in code at all.  You'd just decrease the vision radius.  We could also tie in the pond-mode gradient, so bots in the deeper parts of the sim have the same effect, regardless of what time it is.

Testlund, that would let you run the sims you want, where eyes just really don't work.

The vision radius would need to relate to the cos or sin of the current day/night cycle.  I think it would be pretty easy to code.
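In rough terms (names illustrative, with T the day length in cycles and nightfloor a small nighttime fraction):

visionradius(t) = baseradius × max(nightfloor, (1 + cos(2πt/T)) / 2)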
Title: Eyes
Post by: EricL on November 13, 2006, 03:50:47 PM
Quote from: Numsgil
If we were to modify the way the day/night cycles work...

A most excellent idea, sir, and I agree, fairly easy to code.  On my list.
Title: Eyes
Post by: Jez on November 14, 2006, 06:08:15 PM
I like the direction this is going. I have to say I agree with the premise that binocular vision should be the sole preserve of multibots TBH, much as it pains me to say!

The idea that a bot can use all 9 eyes to see, and then has to choose which eye to focus (and therefore get information via) rather than wasting a cycle to turn and look through eye5, sounds really cool. I could imagine making, if I ever do, a multibot that had effective binocular vision using that.

Day night cycles affecting vision is also a great idea.

Quote
I see no strong reason why DB must parallel biology at all
my bots were sharks with frikin laser beams... although pulse lasers seem to be outdoing beams now; unlike Elite  
Title: Eyes
Post by: Numsgil on November 14, 2006, 11:22:15 PM
Proper binocular vision's primary purposes aren't needed in current sims, which is a problem if we want to move towards binocular vision.

Basically, bots don't need binocular vision to have depth perception, since they have a perfect ability to determine distance using a single eye.  Bots don't need binocular summation, because if anything is within the bot's vision radius, and it's pointed in the right direction, it can see the object.

About the only reason I can imagine for bots to have binocular vision is determining the structure of another multibot using parallax.   And maybe using binocular vision in the dark could extend your vision radius, if we do what I propose above.  That's it.

The problem is basically that current eyes are too powerful.  Dumbing them down is an option, but a drastic one, and it tends to go in the opposite direction of most changes.

I don't have a solution; I'm hoping that by outlining the problem, someone else will have an idea.
Title: Eyes
Post by: EricL on November 15, 2006, 12:17:03 AM
I don't see a problem.  Binocular vision is an evolved adaptation, something a small percentage of the 10 million biological species on the planet have evolved as a means to provide distance and depth-of-field inputs for their brains to utilize.  It's a complex sense, many millions of years in the making, but only because, unlike DB, the real world doesn't have an easy way to provide organisms with distance inputs about the things around them.  If it did, binocular vision would not have evolved.

Yes, we could simulate the environmental conditions that selected for this adaptation in the real world, or dumb down vision such that evolving a way to determine how far away things are, via parallax or some other means, would convey an advantage, but I don't find the goal of evolving such equipment, as a means of providing inputs the system can easily provide, to be particularly interesting.  Call me repetitive, but I'm interested in evolving complex behaviour that uses such inputs, not the inputs themselves.  And lest I be allowed to dream, perhaps someday, eventually, evolving complex intelligence.

Any sense, any inputs we can provide to the bots, we should provide, not evolve.  It's trivial for the system to provide a bot with directional and distance inputs about the things around it.  It's non-trivial to evolve behaviour that uses that information effectively, e.g. hide and pounce, or run and hide, or coordinate attacks with others, or any number of a zillion other interesting and complex things.

Bottom line: I'm interested in evolving non-trivial complex behaviour, not senses.
Title: Eyes
Post by: Testlund on November 15, 2006, 08:28:19 AM
Well, I prefer biological realism, but maybe that's just me. I think DB has been developed in that direction quite well. I also agree with letting day/night cycles affect vision. Very good idea! Seems we have the same opinion on how DB should be most of the time.  
Title: Eyes
Post by: Jez on November 20, 2006, 11:08:01 AM
What about MBs? I thought one of the long-term goals for DB was the evolution of MBs, but with single bots having such powerful eyes anyway, and no need for binocular vision to gain depth perception, perhaps we are unfairly putting MBs in a really bad position.

If you want an evolutionary reason for MBs to exist, then they need to have some defining advantage, rather than just being larger and less mobile food trays for smaller, quicker bots.

The only time I seriously tried to make an MB was when ties were the most powerful weapon around. I figured that an MB using the max number of ties it is allowed to have to create itself would be invulnerable to tie-bot hunters. Not that I managed, 'cause ties always end up confusing me, but the thought was good.

For this reason I am taking to the idea of dumbing down uni-bot eyes as a simple but effective way of driving the evolution of bots forward.

I am also very interested in creating complex behaviour patterns; perhaps giving all the bots access to all the abilities isn't the best way to do it. Evolution isn't driven by having everything - that just makes you fat and lazy!  
Title: Eyes
Post by: EricL on November 20, 2006, 12:07:40 PM
Something on my list is to add (obviously optional) switches to turn various single-bot capabilities off.  For example, I have been considering an option that would turn off the movement sysvars.  This would mean that bots would have to evolve their own mechanisms for locomotion based on the physics of the system, which would select heavily for multibots - for example, using ties for swimming, or tie torque/tie length and fixing and unfixing to walk.

I could do the same thing for vision if people really wanted, i.e. turn off all eyes but .eye5, or turn off the distance values so all you know is that there is something in the eye's field of view, but not how far away...

Personally, I can think of many things that would favor the evolution of multibots, but I think it may require some tweaks to costs and also increased topological complexity before they really take off.  As to the former, I see cell specialization as a key driver for MBs, i.e. some cells get good at attacking, others good at storing nrg, others good at defense, etc.  I'm wondering whether we need to tweak costs, or add additional ones that favor this, i.e. that make it more costly for a single cell to do multiple things than it is for two separate cells to specialize.

As to the latter, I can envision MBs being necessary to climb over obstacles, bridge chasms, fly, swim, move faster, dig deeper and, perhaps more importantly, think better.  A long-distance goal of mine is to create morphological artifacts (think neurons) which favor MBs hooking up to be able to think better.  It's not a complete idea yet, but something I would like to explore down the road.
Title: Eyes
Post by: Jez on November 20, 2006, 02:53:27 PM
Wow, I come up with a bright idea and Eric's already so far ahead of me that he's ordering a pint down the local pub!!

I'd love the option to turn depth perception off for uni-bots, especially if there was a way for MBs to still get the ability.

I've always wanted MBs to evolve into something more viable than web bots or cancerous veg imitators.

You get my full support for adjusting the game any way you see fit when it comes to emphasising the difference between one-cell and multi-cell life.  
Title: Eyes
Post by: Numsgil on November 20, 2006, 05:32:10 PM
I think a step in the right direction is charging bots nrg for eyes, the ability to move, shooting, etc.  As bots turn off their capabilities to do various things, they are charged less and less for upkeep.

That encourages a high degree of specialization, especially if the reduction in nrg use is non-linear.
Title: Eyes
Post by: EricL on November 20, 2006, 06:49:16 PM
Quote from: Jez
I'd love the option to turn depth perception off for uni bots, especially if there was a way for MB's to still get the ability.

Something I would love to see is someone hand-author a multibot that uses binocular vision to determine how far away another bot is.  No program changes would be needed to do this.  Simply author a bot that divides, ties the two bots together using a known tie length, then, ignoring the number that the eyes read back, uses the relative eye angles on the two bots and a little trigonometry to triangulate on a third viewed bot and track that bot in motion.

This exercise would give us some insight into the complexity it would take to actually have binocular vision evolve.
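For reference, the trigonometry involved is the standard two-angle fix: with the two cells a known tie length b apart, and α and β the angles each cell measures between the tie direction and the target, the distance from the first cell is d = b · sin(β) / sin(α + β), with DB angle units converted to radians as rad = 2π · units / 1256.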

Quote from: Numsgil
I think a step in the right direction is charging bots nrg for eyes, ability to move, shooting, etc.  As bots turn off their capabilities to do various things, they are charged less and less for upkeep.

I fully agree, and this is exactly what I have meant in past posts when I indicated I favored morphological costs over genotypic costs (such as DNA operations).  IMHO, we should charge bots for what they do, not how they do it.  So I'm all in for, for example, adding costs directly tied to physical movement (including eye movement if we want) in addition to the other morphological costs we already have (producing poison, slime, shots, ties, etc.).  I'd also like to add costs for reproduction, as described in previous threads.

Charging for using (i.e. reading from) specific sysvar values (such as the eye sysvars) that the system is populating anyway is more problematic.  It's easy for me to see how we can charge for writing to a sysvar (and thus charge for the corresponding/resulting morphological action that implies), but reading from memory locations is harder to catch.   I'm open to suggestions on how we might do this if people really want to, say, make a bot with 9 eyes more expensive than one with 3.
Title: Eyes
Post by: Numsgil on November 20, 2006, 07:11:59 PM
The simplest method I can imagine is to start every bot off with no eyes activated.  Activate an eye only if the bot tries to read from its memory location.  Activated eyes cost nrg to upkeep every cycle.  Eyes that haven't been addressed in a while are automatically turned off, and aren't charged for anymore (maybe over the length of 1000 cycles or something similar).
Title: Eyes
Post by: EricL on November 20, 2006, 09:49:38 PM
Quote from: Numsgil
Activate an eye only if the bot tries to read from its memory location.
Catching the read is the hard part.  The bot DNA can address the memory location a dozen ways from Sunday

e.g. 100 5 mult inc *
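which evaluates, step by step, as (assuming .eye1 sits at location 501, per the standard sysvar layout):

' 100 5 mult  ->  500
' inc         ->  501
' *           ->  read memory location 501, i.e. *.eye1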

So it's not enough to look for eye statements in the DNA.  In fact, I'm unfortunately going to have to turn off the existing perf optimization code that doesn't populate the eye values for bots without .eye sysvars.  More than once I've evolved bots that use indirect eye addressing but are blind because their eyes aren't getting populated.

I get having explicit sysvars under bot control that enable/disable eyes, and then charging for eyes being enabled, whether the bot reads from them or not.

I get charging for writing to sysvars that do something, like .up or .eye5dir.

I get making the amount charged partially a function of the value written, i.e. 5 .up store costs less than 40 .up store.

I even get adding eye features such as vision range and charging bots that want to see farther - this is a simple matter of catching a sysvar write.

I don't get how one charges for reading from a memory location, or how we can efficiently decide NOT to populate certain environmental sysvars such as eyes and refvars.  Doing the work to figure out whether a bot reads from a memory location, in order to determine whether to do the work to populate it or whether to charge for it, seems to me like a losing perf proposition...
Title: Eyes
Post by: EricL on November 22, 2006, 04:33:13 PM
FYI, as of 2.42.9e, I have now implemented all of the main vision ideas discussed in this thread (with the exception of varying vision distance by daytime/nighttime or other means).

The focus eye (the one used to populate refvars) can now be changed to any eye.

Each eye can be independently aimed without the bot turning.

The field of view for each eye can be independently changed, widening or narrowing the field of view.

This should open up a whole new realm for bot vision.  I have already posted some example genes in the genes repository.  Many other interesting capabilities are possible.
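For instance, a single gene can now sweep an eye around like the lighthouse beam mentioned earlier (.eye5dir as mentioned above; the units are assumed here to match .aim's 1256-per-circle scale):

cond
start
 *.eye5dir 40 add .eye5dir store   ' rotate eye5's view ~11 degrees each cycle without turning the bot
stop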

Who says you never get your money's worth?
Title: Eyes
Post by: Jez on November 22, 2006, 05:33:50 PM
Quote from: EricL
Who says you never get your money's worth?

Have you ever played the lottery?

Anyway, if bots can now move their eyes, perhaps a cost for bots turning?
Title: Eyes
Post by: EricL on November 22, 2006, 07:57:34 PM
Quote from: Jez
Have you ever played the lottery?

Anyway, if bots can now move their eyes, perhaps a cost for bots turning?

Lotteries are simply taxes on people who can't do math.....  

I'm all for turning costs.  I think it may have to wait, however, until we move to an angular-momentum-based turning paradigm....