
Vision


Numsgil:
I'm playing with vision and how it might work for DB3.  I'm trying to mimic how real life (mammal) eyes work, without compromising the rest of the sim or making things unfair, etc.  Here's what I'm thinking so far:

Each bot can have up to one eye per side (so up to four total).  The exact position of the eye on the side is something a bot can control, but probably not change easily.  All eyes have a fixed field of view (something slightly less than 180 degrees probably), but they can also swivel in their sockets.

A bot eye has rods distributed along the periphery of the eye, and cones in the center.  How far toward the center the transition from cones to rods sits is something a bot can control, but again not change easily.  Rods are extremely sensitive, so they're good at picking out faint objects (small bots, or large bots very far away), especially if those faint objects are moving.  But they're lousy at determining details, so most of the refvars won't display for those objects.  Cones are less sensitive, so a bot has to be much closer to use them.  But they're very good at picking up detail, so refvars will display for whatever it's looking at.

The overall sensitivity of the eye is again something a bot can set, but not change easily.  More sensitive eyes would come at a higher price somehow.

If a bot arranges its eyes so that the fields of view overlap, the range is extended for both rods and cones in the overlap area by the "quadratic summation model": sqrt(S_1^2 + S_2^2), where S_n is the sensitivity of the nth eye (source).  So binocular vision lets you detect objects and determine details farther out than monocular vision.  But it also means you can't look over your shoulder easily, so to speak.  And you also probably need to physically orient yourself in order to use it.
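As a minimal sketch of that rule (the function name is hypothetical, not DB3 API), combining two overlapping eyes is just the Euclidean norm of their sensitivities:

```python
import math

def combined_sensitivity(sensitivities):
    """Quadratic summation model: overlapping fields of view combine
    as sqrt(S_1^2 + S_2^2 + ...).  Hypothetical sketch, not DB3 code."""
    return math.sqrt(sum(s * s for s in sensitivities))

# Two overlapping eyes with sensitivities 3 and 4 behave like a
# single eye of sensitivity 5 in the overlap region.
print(combined_sensitivity([3, 4]))  # → 5.0
```

Note the gain is sub-additive: two equal eyes only buy a factor of sqrt(2), not 2, which keeps binocular overlap useful without making it overpowered.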

Each bot has "system codules" for vision.  One codule handles signals from the rods: for every object visible to the rods, the rod codule gets called and the relevant sysvars are updated.  The objects are sorted from most peripheral to most central, so a bot can overwrite commands issued for more peripheral objects with commands for more central objects.  Likewise for cones: a cone codule gets called after all rod vision commands are finished, again with the most peripheral objects first and the most central last.  Once the vision codules are finished, the rest of the DNA starts executing.  So vision happens before any other DNA gets called.
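The calling order above can be sketched like this (everything here — the dict layout, key names, and codule signatures — is a hypothetical illustration, not DB3's actual design):

```python
def run_vision(bot, visible_objects):
    """Sketch of the proposed order: rod codule first, then cone
    codule, each walking objects from most peripheral to most central
    so later (more central) calls overwrite earlier ones."""
    # Most peripheral first: largest angular distance from the
    # center of gaze comes earliest in the ordering.
    ordered = sorted(visible_objects, key=lambda o: -abs(o["angle"]))
    for obj in ordered:
        if obj["seen_by_rods"]:
            bot["rod_codule"](bot, obj)   # faint detection, few refvars
    for obj in ordered:
        if obj["seen_by_cones"]:
            bot["cone_codule"](bot, obj)  # full detail, refvars updated
    # ...after this, the rest of the DNA executes.
```

The point of the ordering is that a bot needs no explicit priority logic: whatever it cares about most (central, cone-visible objects) simply gets the last word.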

An eye can potentially see infinitely far away, dependent only on the apparent size of what it's looking at.  So while bots are idling because they don't see anything, the veggies might grow large enough to become visible, and a bot can go off chasing them.  Likewise, lots of smaller veggies clumped together might become visible as the clump gains more and more veggies.  Probably the way it would work is that rods are sort of like how non-eye5 eyes work right now, but with higher resolution.  If enough rods have the same value, it registers; otherwise it's ignored.  Or maybe rods normally carry a random static signal, and a bot has to do signal analysis in its DNA to determine what might be a signal or not (maybe in a codule which gets called before the other vision codules I mentioned above).  Or maybe a bot just sets a threshold for how statistically significant a signal has to be before it registers.
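One possible reading of the "statistically significant" option is a z-score test over the rod array: treat the rods as noise plus signal, and register only rods that sit well above the background.  This is purely a hypothetical sketch of that idea, not DB3 behavior:

```python
import random
from statistics import mean, stdev

def detect_signals(rod_values, z_threshold=3.0):
    """Register only rods whose value sits more than z_threshold
    standard deviations above the mean of the whole array.
    Hypothetical sketch of the 'significance threshold' idea."""
    mu, sigma = mean(rod_values), stdev(rod_values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(rod_values)
            if (v - mu) / sigma > z_threshold]

# Random static background with one bright spot at rod 42:
rods = [random.gauss(10, 1) for _ in range(100)]
rods[42] = 30
print(detect_signals(rods))  # rod 42 should register
```

A bot-tunable `z_threshold` would map naturally onto the sysvar the post suggests: a low threshold sees faint veggies early but chases noise; a high threshold is calm but nearsighted.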

Last, depending on how things shake out, eyes might be the start of a whole new feature involving organs that can get damaged in battle and repaired, and maybe bots can swivel eyes all the way back into their bodies to protect them, sort of like sharks.  They'd probably tie in to reproduction somehow: depending on how a bot splits, the child might get an eye and the parent keeps an eye, and they both have to grow another eye.  But I'll need to think more about organs, so that's really a whole other topic.

Prsn828:
Well, for protection, how does an eyelid sound? I would imagine that if sand came in contact with an eye it would decrease its usability, and that eyes would deteriorate over time.

For reproduction, why not keep the newborn inside the parent's body until it develops, then "birth" it by sending it out of the parent's body?

Arzgarb:

--- Quote from: Numsgil ---An eye can potentially see infinitely far away, dependent only on the  apparent size of what it's looking at. So as bots are idling because they don't see anything,  the veggies might grow large enough to become visible and the bot can  go off chasing it.  Likewise lots of smaller veggies clumped together might also become visible as the clump gets more and more bots.  Probably the way it would work is that rods are sort of like how non eye5s work right now, but with higher resolution.  If enough rods have the same value, it registers.  Otherwise it's ignored.  Or maybe rods normally have a random static signal, and a bot has to do signal analysis in its DNA to determine what might be a signal or not (maybe its a codule which gets called before the other vision codules I mentioned above).  Or maybe a bot just sets a threshold for how statistically significant a signal has to be before it registers.
--- End quote ---
"Infinitely far away" sounds quite scary when the performance goal is 10 cycles/sec with 1000 bots, since in the current version vision calculation takes the most processing power in a sim. Especially with 4 eyes and much better resolution. But if you can figure out a clever algorithm for the job, this would be really great.

Also, the normally random signals that the bot has to process itself sound nice, but the scheme will need to be robust. Like a system codule that takes input (some rod/cone values on the int stack) and produces output (the last value of the boolean stack?) that is then used to decide whether or not the signal gets processed by the actual eye codules. But this needs more thought, because a bot can't know how many rods will be activated, and so how many input values it needs to process...

Maybe the codule gets called for every single rod except the first one. It gets two input values: the value of this rod and of the previous one (or multiple values from both, depending on the final implementation), and produces one (boolean) output: are they "connected"? This way connected rods form chains, and if the length of a chain reaches a threshold (decided by a sysvar?), it gets registered as a signal. But of course, this could be a performance bottleneck, depending on eye resolution.
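The pairwise scheme above can be sketched as follows — here the per-pair codule is just a Python predicate, and all names and the tuple-based return format are hypothetical illustrations, not DB3 code:

```python
def find_chains(rod_values, connected, min_length=3):
    """Call `connected` once per adjacent rod pair; runs of connected
    rods form chains, and a chain of at least `min_length` rods
    registers as a signal.  Returns (start, end) index pairs.
    Hypothetical sketch of the per-pair codule idea."""
    chains, start = [], 0
    for i in range(1, len(rod_values)):
        if not connected(rod_values[i - 1], rod_values[i]):
            if i - start >= min_length:
                chains.append((start, i - 1))
            start = i
    if len(rod_values) - start >= min_length:
        chains.append((start, len(rod_values) - 1))
    return chains

# "Connected" if two adjacent rods see roughly the same brightness;
# the bright run at rods 2..5 is long enough to register:
rods = [0, 0, 7, 7, 7, 7, 0, 0, 0]
print(find_chains(rods, lambda a, b: abs(a - b) <= 1, min_length=4))  # → [(2, 5)]
```

As the post notes, the cost is one predicate call per rod per cycle, so it scales linearly with eye resolution — fine for coarse eyes, a real bottleneck for fine ones.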

Testlund:
I think this is moving too much away from the unicellular concept. I would prefer touch senses instead and eyes that can only sense darkness and light and maybe just recognize movement at close range.  

Prsn828:
I think Testlund is right.

Here comes a new idea:

Keep the concept of rods and cones.  Scrap the idea of multiple eyes.
I say each bot should get only one eye; after all, in real life an organism doesn't even get that, so if DB3 decides to evolve usable vision, it will have to evolve eye cells.

Even though touch is a sense reserved for multicellular organisms, due to computing limits I think it is permissible in DB.

If we do decide on bots with individual eyes, we will need to be able to guarantee that ties work precisely as they are supposed to, and can be controlled very delicately.

Finally, if we need to, I think each bot could specify a unique eye codule in its DNA that would have input values given by the eye, and would process those values.
These input values would be like sysvars, but they would be accessible only to the eye codule, and perhaps only one store value could be written to by the eye codule to prevent overuse of that codule.
