Darwinbots Forum
Code center => Suggestions => Topic started by: EricL on December 26, 2007, 03:48:30 PM
-
In playing around with building a bot that can catch Shrinking Violets, it's apparent that having the sight distance the same for all species makes sneaking up on and "surprising" a species rather difficult, particularly when everyone is using omnieye vision.
I'm still planning on adding .camo at some point, but what do people think about the idea of making an eye's sight distance a function of its width? The wider the eye, the shorter the distance it can see. If we assume the sight distance for the default PI/18 eye width is 1, then I might scale things such that the sight distance S for a specific eye width W is given by:
S = 1 - log(W)/2, where W is the eye width as a multiple of the standard Pi/18 eye width.
Thus, when .eyewidth is 0 and the eye therefore has the standard width, W = 1 and S = 1.
For a 360 degree omnieye, W=36. S ~ 0.22.
For an eye half the normal width, W = 0.5, S~1.15.
For the narrowest eye possible, W=1/35, S~2.54.
So, omnieye bots can only see about 1/4th as far as bots with the standard eye width. Bots using a long-distance eagle eye can potentially see 2.5 times as far as standard bots, but with an incredibly narrow field.
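For anyone who wants to play with the numbers, here is a minimal Python sketch of the proposed relationship (the function name and sample widths are just for illustration; it assumes log means base 10, as above):

import math

def sight_distance(w):
    # Proposed scaling: S = 1 - log10(W)/2, where W is the eye width
    # expressed in multiples of the standard PI/18 width.
    return 1 - math.log10(w) / 2

for w in (1, 36, 0.5):                      # standard eye, 360-degree omnieye, half-width eye
    print(w, round(sight_distance(w), 2))   # 1.0, 0.22 and 1.15 respectively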
This change might actually improve performance, particularly over sims with omnieye bots.
What think?
-
sounds reasonable
-
Totally ok.
Predator animals have a narrower field of view pointed ahead because (I think) they get better resolution for details, which lets them notice smaller, farther or immobile things.
-
"Field of view pointed ahead" is necessary for depth perception. If you need to catch something in front of you (be it a prey or just a branch on a tree), then you need very accurate measurement of how far it is.
I like the idea of balancing depth and broadness, mostly for its simplicity. But I think that accuracy of measurement and the concept of "focus" should be somewhere there as well. Maybe in the future.
-
Sounds good, might even have more of a predator/prey dynamic from it.
-
FYI, this has been implemented in 2.43z.
-
To optimise eye width against sight distance, each eye will need to be 36 degrees.
-
To optimise eye width against sight distance, each eye will need to be 36 degrees.
I don't understand you. For what? Why?
Please explain.
-
If you want to be able to see 360 degrees around and need to see as far as you can, you need to divide the 10 eyes into equal areas.
360 (the number of degrees in a circle) divided by 10 (the number of eyes in a bot) = 36.
Thus 36 degrees (not bot measurements) is the best way to see as far as possible while seeing all around you.
-
And your 360 degree eye bot will be easily picked off by my 359 degree eye bot that can see you before you can see it.
-
If you want to be able to see 360 degrees around and need to see as far as you can, you need to divide the 10 eyes into equal areas.
360 (the number of degrees in a circle) divided by 10 (the number of eyes in a bot) = 36.
Thus 36 degrees (not bot measurements) is the best way to see as far as possible while seeing all around you.
Please...
First, to state the obvious, bots have 9 eyes, not 10.
The area covered by an eye of width W (0 < W <= 36) is given by:
A = PI * (1 - log(W)/2)^2 * (W/36)
If someone wants to find the eye width that maximizes the covered area for a specific eye, one needs to maximize the value of A over the range 0 < W <= 36.
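If it helps, here is a quick Python sketch that does that scan numerically (the step size and names are arbitrary; it assumes the same base-10 log as the original formula):

import math

def covered_area(w):
    # A = PI * S^2 * (W/36): a circular sector of radius S spanning W/36 of a full circle,
    # with S = 1 - log10(W)/2 from the formula above.
    s = 1 - math.log10(w) / 2
    return math.pi * s * s * (w / 36)

best_w = max((w / 100 for w in range(1, 3601)), key=covered_area)
print(best_w, covered_area(best_w))   # with this particular formula the scan lands around W ~= 13.5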
-
I need to point out that in the course of implementing this, it became apparent that VB6 does not include a log operator (base 10). It only includes a natural log operator (base e). I'm lazy, so the actual formula as implemented in the code is:
S = 1 - ln(W)/4, where W is the eye width as a multiple of the standard Pi/18 eye width.
See the first post in this topic for context.
I would appreciate any feedback on whether the eyesight sensitivity represented by this formula meets with people's approval and expectations.
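For anyone curious how much the lazy rounding matters, here is a small Python comparison of the original base-10 form against the ln/4 form as implemented (purely illustrative; the sample widths are arbitrary):

import math

def s_proposed(w):
    return 1 - math.log10(w) / 2   # original proposal: S = 1 - log10(W)/2

def s_implemented(w):
    return 1 - math.log(w) / 4     # as implemented: S = 1 - ln(W)/4, since 2*ln(10) ~= 4.6 was rounded to 4

for w in (0.5, 1, 4, 36):
    print(w, round(s_proposed(w), 2), round(s_implemented(w), 2))

The ln/4 form ends up slightly harsher on wide eyes (0.10 rather than 0.22 for the full omnieye) and slightly kinder to narrow ones.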
-
You can implement log10 by doing ln(x) / ln(10)
-
You can implement log10 by doing ln(x) / ln(10)
I'm sorry, what part of "I'm lazy" did you not understand?
That's basically what I did. ln(10) ~= 2.3, so the divisor of 2 becomes roughly 4.6, which I rounded down to 4.
-
You can implement log10 by doing ln(x) / ln(10)
I'm sorry, what part of "I'm lazy" did you not understand?
That's basically what I did. ln(10) ~= 2.3, so the divisor of 2 becomes roughly 4.6, which I rounded down to 4.
It may not be best to round down. Decimals aren't really supported anyway, so it doesn't even seem necessary to use algebraically derived formulas for a geometric process. Just use geometry. The log function returns an exponent: where 10^x = 3, x has to be the proper exponent to give that value. So don't use log for this problem.
While a logarithmic eye form is a great idea, it may be best to go with the more natural modular form, as in human eyesight. Modular functions of the eye length would mean that the result could be compared against the quantity we already have here: bot length. The eye length is already dependent on the radius of the bot, is it not? If I am not mistaken, that was already used for how eyes return values, with 100 being equal to twice a bot's radius. It seems this was a very good concept: the eyes were on the outside and, no matter what, the eye returned some value tied to its radius. Without this, developing eyesight may be difficult genetically, and could hinder the creation of a mutant genius by pure chance. This also means a small annoying mini bot would not be able to see as far as a bot twice its size with equal widths.
I do see where you're coming from with the log form; it is a focal equation, but I think for easy implementation in genetic coding it should be a modular form. Since we know the radius of the bot as a var, the eye width and length will automatically be limited by it, or at least should be. When I'm not so tired I'll present the formulas for these ideas. Essentially, true vision could be achieved with such a modular function, since the log is basically a more complex form of the same idea. I like where you're coming from, but unless eyes and focal magnification (or length magnification, as you have changed it to do) are tied together, a bot could have 0 eyesight and still work.
-
Not sure I follow you. I only have a minor in math...
Things are actually a little more complicated. Let me explain.
The equation above is only used to determine the maximum distance an eye can see given its width. It gives the relationship between eye width and sight distance and yes, it could be changed to a simple lookup table. If there are no bots within this maximum distance (edge to edge) in the field of view of the specific eye, the eye value will be 0. For the default eye width of pi/18, this sight distance is 1440, edge to edge. So yes, larger bots can see farther and will be seen sooner relative to smaller bots with the same center location.
This max sight distance S is calculated as a function of the eye's width and then the actual value returned by the eye is determined by a different formula:
V = 1/((d+10)/S)^2 where d is the actual edge to edge distance between the two bots.
That is, an eye value is roughly the inverse square of the fraction of the maximum distance represented by the edge-to-edge distance between the bots. (The +10 is there for divide-by-zero protection and to scale values in such a way as to use most of the positive value range.) I chose a formula which attempts to strike a balance between giving traditional eye values (1-100) at mid to long range that legacy bots would be used to, while providing much greater resolution at close range. This is something I am soliciting empirical feedback on.
Note that overlapping bots will generate eye values of 32000 (if the bot is within the eye's field of view). Same for when a bot is inside or overlaps a shape. Eyes can be thought of as being located at the bot's center, so an eye's field of view includes the field of view inside the bot, but you can think of the eye value as "overloading" when a bot overlaps what it is viewing.
For shapes, d is the distance to the closest part of the shape within the eye's field of view. This allows for very precise eyesight at close range for finding shape corners, the angle of incidence of the line of sight and so on. This is not currently the case for viewed bots, however. Today, the center of the viewed bot plus its radius are used to determine whether the bot is in the field of view and within the sight range, and the eye value is calculated using this d, not the d of the closest point of the bot in the field of view. This can give somewhat inaccurate distance values in cases where the closest part of the bot is not in the field of view (when a narrow eye is sighting along a bot's side, for example) and in some cases allows for seeing bots just beyond the sight distance which should really not be visible. It is on my worklist to address this.
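To make the two-step calculation concrete, here is a rough Python sketch of how an eye value might fall out of these formulas. The 1440 base distance, the +10, and the 32000 overload value come from this post; everything else, including the function names and the handling of edge cases, is an illustrative guess rather than the actual VB6 code:

import math

BASE_SIGHT_DIST = 1440   # edge-to-edge sight distance for the default pi/18 eye
OVERLOAD_VALUE = 32000   # returned when the viewed bot overlaps the viewer

def max_sight_distance(w):
    # Step 1: scale the base distance by S = 1 - ln(W)/4 (the implemented form).
    return BASE_SIGHT_DIST * (1 - math.log(w) / 4)

def eye_value(d, w):
    # Step 2: V = 1/((d+10)/S)^2, written here as (S/(d+10))^2, i.e. roughly the
    # inverse square of the fraction of the maximum distance that d represents.
    s = max_sight_distance(w)
    if d <= 0:
        return OVERLOAD_VALUE   # overlapping bots overload the eye
    if d > s:
        return 0                # beyond this eye's sight distance
    return (s / (d + 10)) ** 2

print(round(eye_value(134, 1)))   # a standard-width eye at mid range returns roughly the legacy value 100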
-
I can see the relationship as a mod function, maybe even just a quadratic with two radicals, which could be easier to program. But unfortunately my mind is so zapped; too many things, not enough time or care. When I have the time (maybe next month) I'll elaborate on where I'm coming from: essentially, the values the eye returns would change accordingly.
-
Ok, so a mod is a remainder. This means any ratio has a mod. So we know that, no matter what, there is a maximum length which will never be exceeded; this is our ceiling, or higher Z-Alpha plane. There is also a width, which has a log limit, where it can never reach 0; this means that 0 is the hole, so we can redefine 0 in a mod. We know that any bot can only get a value of 2.54 in S. So we say W mod S*D. That should give you a better return function from an eye system where width and distance both play a role. The inverse square law should work flawlessly, but you need floating decimal points.
-
Okay, I'm fairly certain you're just making up terms now. What is a "Z-Alpha plane"? Is it like the "ZZ9 plural Z alpha"? You have entirely lost me in your train of thought.
-
Yeah, I should have used the term lattice, but I couldn't remember from my math notes.
from wiki- If we consider the lattice generated by a constant α and a variable z, then F(Λ) is an analytic function of z.
-
Sorry I only did intro to abstract algebra. We never got as far as lattices. Or is it topology? Either way, you're going to have to stick with algebra and calc if you want Eric and I to follow you.
-
Sounds fair enough to me.
If I recall, the inverse square law requires no interference. So what if we added heat vectors? This would change the speed of light and cause a bot to see further in some places.