Poll

Could bots benefit from having the way their eyes work changed?

Yes
7 (87.5%)
No
1 (12.5%)
Couldn't care less and where's the chocolate?
0 (0%)

Total Members Voted: 8

Author Topic: Eyes  (Read 3492 times)

Offline Jez

  • Bot Overlord
  • Posts: 788
Eyes
« on: August 22, 2006, 03:31:20 PM »
Do you think that bots would benefit from having their eyes changed a bit? The following quotes come from a previous discussion here:
Quote
What I can't do is mimic the behaviour of simple things like fish, because they have two eyes and bots don't. I can't mimic shoal behaviour, for instance, only mass-following behaviour.
Quote
How about giving bots greater control over eyes, so that, for example, each refvar gets split into 1-9
So *.refeye9 would read back the eye reference of a bot in your eye9
Quote
What I was thinking when I wrote that was more sort of two eyes and being able to choose the location that they are at. The difference between having eyes at side of head or on front of head style.
Quote
Maybe something similar to what I'm doing with ties. Have a command (switcheye or something) that changes which eye has the "focus", and as such changes the info read from the refvars during DNA execution. You could change your "focus" from eye5 to eye3 and back again in a single cycle.
The only possible issue is that while this increases potential complexity (which is good), it decreases the reaction time that bots have. Already bots can accomplish a lot in a single cycle. The few cycles it takes to turn and "lock" onto an opponent are very important to the fitness landscape, and we should be careful before we change it.
Quote
Where the eyes overlap, the animal has depth perception; where it can see with only one eye it has no depth perception but can ID objects.
That's what the bots are doing wrong, isn't it! They have depth perception through the side eyes but can't ID.
Quote
If we remove the distance part of eye5 and require bots to triangulate the distance to other bots, these are the consequences as I see them.
1. We would need to add some trig functions. Perhaps map the usual 0-360 degrees and -1 to 1 of regular sin and cosine to 0-1256 and -1000 to 1000.
2. Things get more complex for the bots. Specifically, finding the distance to the target becomes rather difficult. Finding the velocity, even more so.
3. It may not be possible to tell the difference between a really big bot and a very close bot unless you form a multibot and use stereoscopic vision.
4. A lot of old bots simply wouldn't work.
I would be willing to make this fundamental change as part of the feature set for 3.0, since upgrading the numbers gives us a legitimate reason not to support older bots. But it would be a huge change.
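Point 1 above might look something like the following sketch (Python purely for illustration; the 1256-unit circle and the -1000..1000 scaling come from the quote, while the function names and the law-of-sines triangulation helper are my own invention):

```python
import math

CIRCLE = 1256  # proposed angle units per full circle (from the quote above)

def db_sin(angle_units):
    """Integer sine: maps 0-1256 angle units onto -1000..1000."""
    return round(1000 * math.sin(angle_units * 2 * math.pi / CIRCLE))

def db_cos(angle_units):
    """Integer cosine: same scaling as db_sin."""
    return round(1000 * math.cos(angle_units * 2 * math.pi / CIRCLE))

def triangulate(baseline, angle_a, angle_b):
    """Distance to a target sighted from two eyes a known baseline
    apart (the stereoscopic multibot case), via the law of sines.
    Angles are measured from the baseline, in 1256ths of a circle."""
    a = angle_a * 2 * math.pi / CIRCLE
    b = angle_b * 2 * math.pi / CIRCLE
    return baseline * math.sin(b) / math.sin(math.pi - a - b)

print(db_sin(314))  # 1000 (a quarter circle)
```

This is the sense in which "things get more complex for the bots": distance stops being a free readback and becomes arithmetic the DNA has to perform.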
If you try and take a cat apart to see how it works, the first thing you have in your hands is a non-working cat.
Douglas Adams

Offline Numsgil

  • Administrator
  • Bot God
  • Posts: 7713
Eyes
« Reply #1 on: September 07, 2006, 07:20:38 AM »
If we do this we need to do it very carefully. I've already input a lot into this topic, so I think others should start speaking up as well.

Exactly what would you change about the way that eyes work that would make them more realistic or whatever your measure of "better" is?

Offline Ramiro

  • Bot Neophyte
  • Posts: 25
Eyes
« Reply #2 on: September 14, 2006, 10:39:04 PM »
I think that eyes all around the bot could be interesting. Sometimes it would be useful and sometimes it wouldn't, but it would be quite challenging to make a bot take the right course of action when it sees many enemies around.

Binocular perception could be used to assess distance and eliminate the variables that return the EXACT distance, something that hardly occurs in nature.

Offline EricL

  • Administrator
  • Bot God
  • Posts: 2266
Eyes
« Reply #3 on: October 20, 2006, 05:01:36 PM »
I have been doing some thinking in this area as I am working on making shapes visible to bots. Today they are not visible. They block what is behind them, but instead of seeing a shape and being able to know its distance, that it is a shape, etc., it appears to the bot that there is simply nothing in that direction, even if there is a bot on the other side of the shape that would be within viewing range were the shape not there. So, today, bots can hide behind shapes, but they have no way of knowing that they are doing so or of evolving behaviour to use or interact with shapes.

Today, the only things bots can see are other bots. They can't see shots, field borders, ties, teleporters or shapes. They can assume that if they see something, anything, it's a bot. This will and should change. My vision (pun intended) is that the world gets more complex over time, and the variety of objects in a sim, and thus the variety of things bots can see and hence interact with, increases. I want to put aside for a moment questions about how many eyes there should be, whether refvars should work for eyes other than eye5 and so on, and focus (pun intended) on the question of how bots should differentiate between the different kinds of objects that come into view, because this is likely to be the next area of vision I work on.

In DB, for reasons of practicality, we take shortcuts over the route nature took. We could implement photons, for example, and make vision based upon reflected light, requiring that bots evolve recognition logic to sense and distinguish the different photon reflection patterns that constitute a bot vs. a shape (not to mention a moving bot vs. a fixed bot vs. a far-away bot, etc.). Needless to say, we would be at it a very long time.

So instead, we use eye distance numbers and refvars to bootstrap much of the underlying machinery nature had to evolve, so that our bots can focus (hee hee) on evolving behaviour that utilizes these built-in mechanisms rather than on evolving the mechanisms themselves.

So, I intend to simply add on to the existing refvar paradigm for object type recognition by adding a .reftype sysvar. A reftype of, say, 0 would indicate that the closest thing visible in eye5 is a bot, any type of bot. Or we could go finer-grained than that and reserve different type values for different types of bots, e.g. autotroph, non-autotroph, corpse. But my main point is that there would be other values indicating that what the bot was looking at was something other than a bot. The first one I would implement would be for looking at a shape. We could extend this to include a type for the field border if we wanted bots to see the edges of the world, as well as any new kinds of objects we add in the future.
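A minimal sketch of how gene logic might consume such a .reftype (Python for illustration; only "0 = bot" comes from the post, and the other values and reaction names are assumptions, not a spec):

```python
# Hypothetical .reftype values; only "0 = bot" is from the proposal
# above, the rest are placeholders for illustration.
REFTYPE_BOT = 0      # closest thing in eye5 is a bot (any kind)
REFTYPE_SHAPE = 1    # a shape
REFTYPE_BORDER = 2   # the field border, if we make it visible

def react(reftype):
    """The kind of trivially simple gene logic a type sysvar enables."""
    if reftype == REFTYPE_BOT:
        return "chase"       # existing chase/attack/tie behaviour
    if reftype == REFTYPE_SHAPE:
        return "steer away"  # don't try to eat scenery
    return "ignore"
```

Without the type sysvar, each of these branches would instead need recognition logic built from generic physical observations.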

It does mean that existing bots will need to add logic to avoid trying to chase/attack/tie to/run from/swarm with shapes, but I don't see a way around that without inventing a new kind of sense. For example, we could add brand-new sysvars for echolocation or infrared vision or something, as a new sense with its own set of refvar-like mechanisms, and not enhance eyes, but that has many new issues, not the least of which is that it is a lot of work and the sysvar space is limited. Besides, going the refvar route will only confuse old bots in sims where there are shapes or other, future, non-bot things that can be seen, which I think is acceptable.

Comments?
Many beers....

Offline Numsgil

  • Administrator
  • Bot God
  • Posts: 7713
Eyes
« Reply #4 on: October 20, 2006, 07:09:41 PM »
Why don't we delve a bit into bot psychology for this?  Right now, for a bot, everything it sees is a bot.  If we add things that the bot can see, to the bot these would be new kinds of bots.

So instead of telling bots that this is a wall, this is a teleporter, etc. we should tell bots that this has so much energy, this has so much body, etc.

Motion is important for real animals when they decide if something is alive (another animal) or not.  There are two types of motion: translational and deformational.  A car moving down a highway has very little deformational motion but a lot of translational motion.  It seems to just glide.

Something like a man on a treadmill has a lot of deformational motion but little translational.  The man wiggles and squirms but doesn't go anywhere.

So I propose giving bots two sorts of senses.  One detects translational motion (the refvels) the other detects "spinning" motion (a bot spinning looking for food, or an outbound teleportation vortex, etc.).

Using the bot's senses, smart bots can distinguish between all the sorts of "bots" in their environment.

Other living bots - They move (translationally if hunting, spinning if idle) and have nrg > 0.

Dead bots - They don't spin and have body > 0 and nrg = 0.

Algae - Default algae has refeye = 0; dunno about a good idea for any old plant.

Walls - No motion whatsoever, body = 0, nrg = 0.

Teleporters - body = 0, nrg = 0, lots of spinning motion.

This way, to the bots, everything in the world is a bot; some just have different properties.
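The property-based scheme above could be sketched like so (Python; the rules are taken straight from the list, while the function shape and exact comparisons are illustrative):

```python
def classify(nrg, body, refeye, spinning):
    """Classify a viewed object purely from generic 'bot' properties,
    per the list above. `spinning` is the proposed rotational /
    deformational motion sense."""
    if nrg > 0 and refeye == 0:
        return "algae"        # default algae has refeye = 0
    if nrg > 0:
        return "living bot"   # moving or spinning, nrg > 0
    if body > 0 and spinning == 0:
        return "dead bot"     # no spin, body > 0, nrg = 0
    if body == 0 and spinning > 0:
        return "teleporter"   # body = 0, nrg = 0, lots of spinning
    if body == 0:
        return "wall"         # no motion at all, body = 0, nrg = 0
    return "unknown"
```

Smart bots would effectively carry a decision tree like this in their genes, rather than being handed an object type directly.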

Offline EricL

  • Administrator
  • Bot God
  • Posts: 2266
Eyes
« Reply #5 on: October 20, 2006, 08:09:32 PM »
Quote from: Numsgil
Right now, for a bot, everything it sees is a bot. If we add things that the bot can see, to the bot these would be new kinds of bots.
This is mostly a historical artifact of coding ease.  Old-style walls were implemented as bots, taking up slots in the rob array and so on, because it was easy to reuse the code for seeing them, etc.  While this worked, it was a total hack and hellishly slow.  Bots are heavyweight objects.  Shapes are implemented in their own array with their own properties.  So are teleporters.  So are shots.  If we are not implementing viewable objects as bots, I see little reason why we should try to force-fit the "everything is a bot" model.  At some point in the future, I hope the number of non-bot objects a bot sees during its lifetime vastly outnumbers the number of bots it sees.

Quote from: Numsgil
So instead of telling bots that this is a wall, this is a teleporter, etc. we should tell bots that this has so much energy, this has so much body, etc.

This is essentially what I see the refvar method evolving into.  Refvars become the generic properties of any viewable object, whether it is a bot or something else.  They include position, velocity, etc.  Most of what we need is already there.  The existing refvars can be interpreted much more generically than they are today without breaking anything.  Shapes have positions.  Shapes have velocity.  If we want to add more properties (e.g. color, reflectivity, rotational motion, texture) we can do so in a generic fashion, so that each property can be a property of any viewable object.  I'd even be happy to do away with some of the existing refvars that are totally bot-specific and rely on memval and memloc to interrogate the more bot-specific properties of a bot when the object being looked at is a bot.

The reftype sysvar just saves bots from having to code a lot of recognition logic into their genes, recognition logic that would have to rely upon generic physical observations.  Suppose we make shots visible.  Both shots and bots move.  Which do I chase, which do I flee from?  How do I determine the shot type?  Coding for that just by generic properties would be difficult and complex.  Having a reftype sysvar makes the gene logic for this trivial.  Personally, I'd like to keep the focus on evolving behaviour and less on evolving senses.

Quote from: Numsgil
So I propose giving bots two sorts of senses.  One detects translational motion (the refvels) the other detects "spinning" motion (a bot spinning looking for food, or an outbound teleportation vortex, etc.).

We already have the refvels.  I'm not opposed to adding additional motion-descriptor refvars for describing the complex motion of viewable objects if people think it is critical, but motion is only one property of objects and hardly all there is to seeing.  Color, for example, could be important as one way a bot remembers a particular shape or another bot.  It might evolve to know that its home is the yellow shape and to turn around if it can no longer see the yellow shape.  I'm all for basing higher-level mechanisms on more generic underlying physics, but I don't think we want to reinvent vision at this stage.
Many beers....

Offline Jez

  • Bot Overlord
  • Posts: 788
Eyes
« Reply #6 on: November 09, 2006, 06:20:46 AM »
Quote
I'd like to keep the focus on evolving behaviour and less on evolving senses.

Aren't they part and parcel of the same thing? Most of the senses that have been added to the bots in the past have, IMO, added to the behaviours you could evolve.

I sort of like the direction you guys are going, but I have to comment that, to me, everything can be identified using the senses and a series of yes/no questions. Refvars, if you like. Maybe I can't tell what type the shot is until it hits me, but I can tell that it is very small, traveling very fast and heading this way. If some other bot tells me first that it is food then I might stick around and use my touch sense to check, but otherwise I'd leg it.

I started this post originally because I found I couldn't do proper herd behaviour, only follow or clump behaviour. The limit I saw on eyes was that they can only see one thing at any one time. However many extra refvars you add, unless you address that problem, the behavioural changes they cause will remain limited.
If you try and take a cat apart to see how it works, the first thing you have in your hands is a non-working cat.
Douglas Adams

Offline EricL

  • Administrator
  • Bot God
  • Posts: 2266
Eyes
« Reply #7 on: November 09, 2006, 11:30:30 AM »
Quote from: Jez
Aren't they part and parcel of the same thing? Most of the senses that have been added to the bots in the past have, IMO, added to the behaviours you could evolve.
I think you are misunderstanding me.  I am happy to entertain suggestions that we add eyes all the way around the bot, and that we add refvars for every eye so that bots can have 360-degree peripheral vision, all in a single cycle.  I'd be happy to add sysvars and code to support other high-level, built-in senses ready and waiting for bot DNA to use, building into the starting position of every bot the ability to use whatever level of senses we decide we want.

My point is that I want the logic that bots evolve in their DNA to be free to focus as much as possible on complex behaviour that utilizes input from senses, and not spend simulation cycles on evolving the senses themselves in the DNA.  Where we want additional high-level senses, we absolutely SHOULD bootstrap the bots and build them directly into the simulator code in a high-level way, not require that they evolve from lower-level senses in bot DNA.  The latter would take too long.

This issue is a fundamental question, perhaps 'the' fundamental question, that anyone writing an Alife simulator has to ask.  At what "level" do we want evolution to operate?  What things do we build in from cycle 0, so that every organism just has them without any evolved genes, and what things require evolved logic?  I prefer to focus on evolving complex behaviour, not senses.
Many beers....

Offline Zinc Avenger

  • Bot Builder
  • Posts: 56
Eyes
« Reply #8 on: November 10, 2006, 08:01:30 AM »
Bots should have access to as much information as possible - but I don't believe that they should have any real form of processing done for them; that should all be coded or evolved.

I'd suggest having a separate set of eyes for sensing different objects - keep the eye system the way it is and use those eyes for seeing bots. Add some more eyes for seeing shapes, and more eyes for seeing shots. More eyes for seeing teleporters. And so on. That might over-complicate things a little though.

How about making eyes see everything, but only returning the closest object it can see whether it is shot, bot, shape or whatever? And add a second set of eyes which returns a broad category of what that eye is looking at - bot, shot etc. It might be interesting to see the effect it has on continually-shooting bots - they'd effectively blind themselves whenever they shoot, so perhaps exclude a bot's own shots.

Offline EricL

  • Administrator
  • Bot God
  • Posts: 2266
Eyes
« Reply #9 on: November 10, 2006, 02:31:36 PM »
Quote from: Zinc Avenger
Bots should have access to as much information as possible - but I don't believe that they should have any real form of processing done for them; that should all be coded or evolved.

On this philosophical approach issue, we are in complete agreement.

Quote from: Zinc Avenger
I'd suggest having a separate set of eyes for sensing different objects - keep the eye system the way it is and use those eyes for seeing bots. Add some more eyes for seeing shapes, and more eyes for seeing shots. More eyes for seeing teleporters. And so on. That might over-complicate things a little though.

I strongly disfavor this approach for several reasons, including the proliferation of sysvars, the scaling issues with adding more eyes for better visual resolution (see below), and the problems with adding new types of objects to the system in the future: rocks, hills, lakes, etc.  While I agree the simulator should handle object class recognition and identification for the bot, i.e. some sysvar value should indicate to the bot that what it sees is another bot, a shot, a shape, a rock, a pond, etc., the bot should not have to use different senses to view different types of objects.

Quote from: Zinc Avenger
How about making eyes see everything, but only returning the closest object it can see whether it is shot, bot, shape or whatever? And add a second set of eyes which returns a broad category of what that eye is looking at - bot, shot etc. It might be interesting to see the effect it has on continually-shooting bots - they'd effectively blind themselves whenever they shoot, so perhaps exclude a bot's own shots.

The idea of separating peripheral vision and focused vision is a good one and follows nicely from the model we have in place today regarding the special abilities of .eye5.  From both a coding perspective and an elegance perspective, I like the notion of a bot being able to see a little about a lot and a lot about a little, and having to 'focus' on what it wants to see a lot about.

Given the sysvar model we have, I cannot immediately see a programmatic way to allow individual eyes to see more than a single object each (the closest object) per cycle.  However, if we want better visual resolution and the ability to see past nearer objects (such as shots the bot just fired) to detect farther-away objects, I might suggest adding additional eyes and/or narrowing the field of view for each eye.  More eyes with narrower fields of view equals better resolution, and with sufficient resolution, nearby small objects such as shots will not block larger, far-away objects.

Furthermore, I suggest we make the field of view for each eye, and even the direction it is looking, subject to evolution by adding additional sysvars for each eye which control the width of its field of vision and its angle relative to .aim.  This would allow bots to evolve different visual strategies: bots with great peripheral vision that use eyes with wide fields of view all around but which possess limited resolution to see past nearby objects, or myopic bots with lousy peripheral vision but narrowly focused eyes for better resolution in a particular direction, or a combination of each, or even the ability to change their visual capabilities from one mode to another as circumstances arise.

As I discussed in prior posts above, I would retain the .eye5 refvar behaviour as the mechanism for getting details on a viewed object, in particular viewed-object identification.  If you just want to know what the closest object is, make the field of view for .eye5 360 degrees and the refvars will represent the properties of the nearest object.  You will know what it is and how close, but not the direction.  If you want to focus on a particular viewed object, just narrow the field of view of .eye5 as much as you want and point it at the thing you want to view, and the refvars will reflect the properties of that object even if it is farther away than others.
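Here is a rough sketch of that movable, resizable .eye5 (Python; angles use the existing 1256-units-per-circle convention, and every name here is illustrative rather than an actual sysvar):

```python
CIRCLE = 1256  # angle units per full circle

def in_field(bearing, eye_dir, eye_width):
    """True if an object at `bearing` falls inside an eye pointed at
    `eye_dir` with a total field of view of `eye_width` units."""
    diff = (bearing - eye_dir + CIRCLE // 2) % CIRCLE - CIRCLE // 2
    return abs(diff) <= eye_width // 2

def eye_value(objects, eye_dir, eye_width):
    """Each eye reports only the nearest object in its field of view.
    `objects` is a list of (bearing, distance, label) tuples."""
    seen = [(dist, label) for (bearing, dist, label) in objects
            if in_field(bearing, eye_dir, eye_width)]
    return min(seen)[1] if seen else None

objects = [(0, 300, "shape"), (314, 120, "bot"), (900, 50, "shot")]
print(eye_value(objects, 0, CIRCLE))  # 360-degree eye5: "shot" (nearest)
print(eye_value(objects, 314, 100))   # narrowed, aimed eye5: "bot"
```

The two calls at the end show the tradeoff in the post: a wide field tells you only about the single nearest thing anywhere, while a narrow, aimed field sees past nearer clutter to the object of interest.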
Many beers....

Offline Numsgil

  • Administrator
  • Bot God
  • Posts: 7713
Eyes
« Reply #10 on: November 10, 2006, 10:37:39 PM »
I would have all eyes a bot has cover the same angle.  I can see problems otherwise.

Maybe something like this: each bot gets a field of view, centered around its aim.  This field of view is subdivided into n eyes.  Using a command similar to the one I'm developing for ties, a bot can change which eye has the focus during a cycle, which updates the different sysvars.  Maybe you can switch focus only between eyes you designate as focusable eyes.  Eyes without focus can only tell you the distance to the nearest object.

The more eyes a bot has, the more nrg the bot has to pay in upkeep costs.  Focusable eyes cost more to maintain than the peripheral kind.

Turning eyes on, adding eyes, and increasing the field of view cost a significant amount of nrg, which should prevent bots from overly abusing their eyes and encourage specialization (narrow lookers for dogfighters, broad vision for grazers).

We can reserve sysvars 500 through 549 for eyes.
« Last Edit: November 10, 2006, 10:38:45 PM by Numsgil »

Offline EricL

  • Administrator
  • Bot God
  • Posts: 2266
Eyes
« Reply #11 on: November 11, 2006, 01:34:32 AM »
Quote from: Numsgil
I would have all eyes a bot has cover the same angle.  I can see problems otherwise.

Such as?

Quote from: Numsgil
Maybe something like this: each bot gets a field of view, centered around its aim.  This field of view is subdivided into n eyes.  Using a command similar to the one I'm developing for ties, a bot can change which eye has the focus during a cycle, which updates the different sysvars.  Maybe you can switch focus only between eyes you designate as focusable eyes.  Eyes without focus can only tell you the distance to the nearest object.
I'm not sure I know what you mean by "during a cycle".  I think you mean that the bot can change which eye has the focus and update the refvars based on the new focus eye, all in the same cycle.  The same can easily be achieved in the model I outline.  The bot could change the field of view and direction of eye5 and the code would update the refvars based on the new direction/focus, all in the same cycle.  In fact, this is how it works today, I believe.  Eye refvar values are a function of the new .aim value and are updated the same cycle.

I believe the model I outline is the more general one.  That is, with the exception of changing which eye is used to update the refvars (which could be added, but is largely unnecessary in my model because the direction of eye5 can be changed, potentially overlapping or subsetting other eyes without the bot turning), what you outline is a special case of what I outline, in that in your model the eyes are not movable and have a fixed field of view.  Your model has the advantage (as does mine) of allowing bots to get information on the different objects they see without turning, by changing the eye used for loading refvars, but unlike mine it does not allow bots to evolve different visual strategies w.r.t. the direction their eyes face and/or evolve strategies to narrow or widen their field of view dynamically as needed.

Quote from: Numsgil
The more eyes a bot has, the more nrg the bot has to pay in upkeep costs.  Focusable eyes cost more to maintain than the peripheral kind.

Turning eyes on, adding eyes, and increasing the field of view costs a significant amount of nrg, which should prevent bots from overly abusing their eyes, and encouraging specialization (narrow lookers for dog fighters, broad vision for grazers).
Eyes correspond to sysvars.  I do not see an easy way to allow some bots to have more eyes than others unless we add additional eye enable/disable sysvars, which personally I don't favor, for several reasons, among them that a single point mutation could cause a bot to be charged for something it does not really use.  I similarly dislike the idea of trying to determine which eyes a bot is using, and how, as part of DNA flow by trying to catch sysvar reads in order to charge more or less, as that is extremely problematic given the various ways (that you recently pointed out in another thread) that exist to address sysvar locations.

In my model, all bots have the same number of eyes, but how they use them (wide or narrow, and where they point them: ahead, to the sides, all around, behind them, wide and overlapping or narrow with gaps between them, or spinning around like a lighthouse beam) is completely under the control of selection.  If we wanted to charge bots for changing how their eyes are used (something I don't advocate but am not opposed to), we could choose to charge for sysvar operations such as changing the field of view of an eye or changing its direction, which would be a straightforward charge for storing to the corresponding sysvars.  But the issue of whether we charge for changing how bots use their eyes is IMHO somewhat orthogonal to what the right eye model should be.

Quote from: Numsgil
We can reserve sysvars 500 through 549 for eyes.

My model requires 3 sysvars for every eye, plus whatever refvars we choose to add for describing the properties of objects other than bots.  For each eye, we need one read/write sysvar for the eye's direction relative to .aim, one read/write sysvar for the width of the eye's field of view, and one read-only sysvar for the eye value itself.

Given that each eye is movable and the width of field of each eye changeable, I see no great need to add additional eyes beyond the current nine that we have.
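For concreteness, the 3-sysvars-per-eye bookkeeping might lay out like this (Python; the 500 base address is borrowed from Numsgil's suggested 500-549 reservation quoted above, and the layout itself is an assumption):

```python
BASE = 500  # first address of a hypothetical reserved eye block

def eye_sysvars(n):
    """Addresses for eye n (1-9): (direction relative to .aim,
    field-of-view width, read-only eye value)."""
    offset = BASE + 3 * (n - 1)
    return (offset, offset + 1, offset + 2)

# 9 eyes * 3 sysvars = 27 addresses, comfortably inside 500-549:
print(eye_sysvars(1))  # (500, 501, 502)
print(eye_sysvars(9))  # (524, 525, 526)
```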
« Last Edit: November 11, 2006, 01:45:57 AM by EricL »
Many beers....

Offline Numsgil

  • Administrator
  • Bot God
  • Posts: 7713
Eyes
« Reply #12 on: November 11, 2006, 03:45:06 AM »
It's late night for me, so excuse me if I start to ramble...

Quote from: EricL
I'm not sure I know what you mean by "during a cycle".  I think you mean that the bot can change which eye has the focus and update the refvars based on the new focus eye all in the same cycle.

Say a bot is focusing on eye3.  It can change the focus to eye5 and the refvars are all updated in the same cycle.  Something like ".eye5 setfocus".  All the refvars are updated to reflect the new visual target in eye5.  The bot continues executing its DNA, then decides to switch to eye6.  It does ".eye6 setfocus" and now it's looking through .eye6.  All the refvars get updated to reflect what eye6 sees.  All this happens in the same cycle, while the DNA is still executing.  Different parts of your DNA can see refvars through different eyes in the same cycle.

The idea is that we extend the system set up for using multiple ties in a single cycle to eyes as well.  For this to make any sense at all, you'll need to know what I'm talking about with the new tie paradigm.  It should be in the wiki.

In the present system, you would need to physically turn, which takes a full cycle, if you want to see the refvars of what's in eye3.
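The mid-cycle semantics might behave like this toy sketch (Python; the Bot class and the per-eye refvar snapshots are illustrative stand-ins for the DNA interpreter, not real simulator code):

```python
class Bot:
    def __init__(self, eye_targets):
        # eye_targets: eye number -> refvar snapshot for that eye's target
        self.eye_targets = eye_targets
        self.refvars = {}

    def setfocus(self, eye):
        """Like '.eye6 setfocus': the refvars reload immediately,
        mid-cycle, rather than on the next cycle."""
        self.refvars = self.eye_targets.get(eye, {})

bot = Bot({3: {"refnrg": 500}, 6: {"refnrg": 80}})
bot.setfocus(3)
gene_one_sees = bot.refvars["refnrg"]   # 500
bot.setfocus(6)
gene_two_sees = bot.refvars["refnrg"]   # 80, read later in the same cycle
```

The point of the proposal is exactly that both reads happen within one cycle of DNA execution, with no physical turning in between.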

Quote
Eyes corrospond to sysvars.  I do not see an easy way to allow some bots to have more eyes than others unless we add additional eye enable/disable sysvars

The system I outlined above takes care of this by overloading the existing refvars.  .refnrg would correspond to the nrg of whatever you're looking at with your currently active eye.  As you move your focus around, different data gets loaded into the refvars.  You wouldn't need any additional refvars for 1 eye or 1 million, other than to allow for backwards compatibility with eye1-9.

Quote
In my model, all bots have the same number of eyes, but how they use them (wide or narrow, and where they point them: ahead, to the sides, all around, behind them, wide and overlapping or narrow with gaps between them, or spinning around like a lighthouse beam) is completely under the control of selection.  If we wanted to charge bots for changing how their eyes are used (something I don't advocate but am not opposed to), we could choose to charge for sysvar operations such as changing the field of view of an eye or changing its direction, which would be a straightforward charge for storing to the corresponding sysvars.  But the issue of whether we charge for changing how bots use their eyes is IMHO somewhat orthogonal to what the right eye model should be.

I'm thinking in terms of large creatures here.  Binocular vision is a tradeoff against wider field of view.  We could probably have both, but it would mean additional eyes placed on the sides or even the back of our heads.

We can't move our eyes around our heads to change our field of view except through evolution.  This is something that's difficult to do with the bots.  We could have them only be able to change their eyes in the first 100 cycles after birth, or have it change randomly, but that's largely missing the point.  If you let bots actively take control of their eyes, at any time in their life, there needs to be a cost associated with that.  My body can't just grow another eye during a battle for nothing.  Supposing it could grow an eye at all, of course.  That eye takes effort to grow and connect into my brain.

The costs associated with vision are going to be instrumental in how they're used.

Ultimately, whether we have 9 very customizable eyes or a greater number of more rigidly defined eyes is largely a matter of taste.  They're probably effectively the same.  Feel free to cherry-pick aspects of mine if they suit your fancy.

Offline EricL

  • Administrator
  • Bot God
  • Posts: 2266
Eyes
« Reply #13 on: November 11, 2006, 04:19:05 PM »
Ah.  I understand what you mean now by 'during a cycle'.

Independent of the eye model, I consider it beyond the scope of what I'm willing to do with the VB version in the short run to allow for multiple refvar updates within a single cycle.

I think we are in agreement on the way refvars should work as a means to gather a lot of information about whatever the bot is focused on, i.e. the refvars should reflect the properties of whatever the bot is focused on, be it another bot or something else.  How that focus is achieved, by changing the direction of one eye or by switching from one eye to another, is still a point of debate.

I suggest that only advanced, coordinating multi-bots should be capable of true binocular vision, and that we should never allow eyes anywhere on a single bot other than at its center.  Note that since eyes read back distance, we already have some aspects of parallax vision with single bots (we are lacking depth of field), even when only using a single eye.

I am not suggesting that the number of eyes a bot has is changeable or that it can "grow eyes" as needed.  What I am suggesting is that a bot can look around without turning its body.

Most biological organisms (that have eyes) can move their eyes independently of their body, including us, and there are many biological organisms capable of wide ranges of eye-socket movement, including some with eyestalks that can cover the full 360 degrees with a single eye.

Because all eyes are at the bot's center, what I propose is much more akin to eyeball movement within a socket, not physical relocation of an eye to different places on an organism, and given that bots are virtual organisms, I see no reason for us to artificially restrict the potential degrees of eye-socket movement.

We can have costs for changing the properties of an eye if people want, the direction it points and its field of view, but I see such costs as much less critical than you do, since I equate these operations to simple eyeball movement within a socket and changes in lens focal length: basically inexpensive muscle movements for biological organisms.

And I should point out there is a natural cost tradeoff of sorts in being able to change the field of view of an eye.  The wider the field, the more I can see, but the less I know about the things I do see, i.e. I give up resolution for width of field.  This alone seems sufficient to me to provide selective pressure on eye field of view.
Many beers....

Offline Numsgil

  • Administrator
  • Bot God
  • Posts: 7713
Eyes
« Reply #14 on: November 11, 2006, 11:06:00 PM »
Sounds good.

Multiple sysvar updates in a cycle isn't as hard as you might think.  The trick is really storing all your data in the eyes and loading it into memory as your focus changes.  If changing your focus is a command, it acts much like a store or inc command, in that it's changing memory values.

I can probably rig up this part of the system if you don't want to hassle with it.

Are you still restricting eye5 to being the only eye that can receive focus?  My primary concern is providing potentially different costs between peripheral "eyes", which only give distance and no other information, and focus eyes, which give more information.  This is sort of moot if only one eye can be used to focus.

Everything else I'm pretty indifferent about.