Author Topic: Vision

Offline Numsgil

  • Administrator
  • Bot God
  • Posts: 7742
Vision
« on: May 16, 2009, 03:25:02 AM »
I'm playing with vision and how it might work for DB3.  I'm trying to mimic how real life (mammal) eyes work, without compromising the rest of the sim or making things unfair, etc.  Here's what I'm thinking so far:

Each bot can have up to one eye per side (so up to four total).  The exact position of the eye on the side is something a bot can control, but probably not change easily.  All eyes have a fixed field of view (something slightly less than 180 degrees probably), but they can also swivel in their sockets.

A bot eye has rods distributed along the periphery of the eye, and cones in the center.  How far toward the center the transition from rods to cones occurs is something the bot can control, but again not change easily.  Rods are extremely sensitive, so they're good at picking out faint objects (small bots, or large bots very far away), especially if those faint objects are moving.  But they're lousy at determining details, so most of the refvars won't display for those objects.  Cones are less sensitive, so a bot has to be much closer to use them.  But they're very good at picking up detail, so refvars will display for whatever it's looking at.

The overall sensitivity of the eye is again something a bot can set, but not change easily, with more sensitive eyes coming at a higher price somehow.

If a bot arranges its eyes so that their fields of view overlap, the range is extended for both rods and cones in the overlap area by the "quadratic summation model": $\sqrt{S_1^2 + S_2^2}$, where $S_n$ is the sensitivity of the nth eye (source).  So binocular vision allows you to detect objects and determine details farther out than a single eye could.  But it also means you can't look over your shoulder easily, so to speak.  And you also probably need to physically orient yourself in order to use it.
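
As a quick sanity check of that formula, in throwaway Python (generalizing past two eyes is just my assumption of how summation would stack):

[code]
import math

def combined_sensitivity(sensitivities):
    """Quadratic summation of per-eye sensitivities in an overlap region."""
    return math.sqrt(sum(s * s for s in sensitivities))

# Two overlapping eyes with sensitivities 3.0 and 4.0 act like a
# single eye of sensitivity 5.0 inside the overlap:
print(combined_sensitivity([3.0, 4.0]))  # 5.0
[/code]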

Each bot has "system codules" for vision.  One codule handles signals from the rods.  For every object visible to the rods, the rod codule gets called and the relevant sysvars are updated.  The objects are sorted from most peripheral to most central, so a bot can overwrite commands issued for more peripheral objects with commands handling more central ones.  Likewise for cones: a cone codule gets called after all the rod vision calls are finished, again with the most peripheral objects first and the most central last.  Once the vision codules are finished, the rest of the DNA starts executing.  So vision happens before any other DNA gets called.
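
Roughly, the call order would look something like this sketch (all the names here are placeholders, nothing is final):

[code]
def run_vision(bot, visible_objects):
    """Run vision codules before the rest of the DNA, peripheral first."""
    # Sort by descending angle from the eye's center axis, so central
    # objects run last and can overwrite sysvars set for peripheral ones.
    by_centrality = sorted(visible_objects,
                           key=lambda o: abs(o.angle_from_center),
                           reverse=True)

    for obj in (o for o in by_centrality if o.seen_by_rods):
        bot.update_rod_sysvars(obj)   # faint detection, few refvars
        bot.call_codule("rod_vision")

    for obj in (o for o in by_centrality if o.seen_by_cones):
        bot.update_cone_sysvars(obj)  # detailed view, full refvars
        bot.call_codule("cone_vision")

    bot.run_main_dna()  # ordinary DNA only executes after vision
[/code]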

An eye can potentially see infinitely far away, dependent only on the apparent size of what it's looking at.  So while bots are idling because they don't see anything, the veggies might grow large enough to become visible, and the bots can go off chasing them.  Likewise, lots of smaller veggies clumped together might become visible as the clump gains more and more veggies.  Probably the way it would work is that rods are sort of like how the non-eye5 eyes work right now, but with higher resolution.  If enough rods return the same value, it registers; otherwise it's ignored.  Or maybe rods normally carry a random static signal, and a bot has to do signal analysis in its DNA to determine what might be a real signal (maybe in a codule which gets called before the other vision codules I mentioned above).  Or maybe a bot just sets a threshold for how statistically significant a signal has to be before it registers.
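
The "enough rods agree" version might be as simple as this (again, just a sketch; the numbers are made up):

[code]
from collections import Counter

def register_signals(rod_values, min_agreeing_rods):
    """A value registers only if enough rods report it; lone readings
    (static, tiny objects) are ignored."""
    counts = Counter(rod_values)
    return [value for value, n in counts.items() if n >= min_agreeing_rods]

# A veggie large enough to cover five rods, plus three rods of static:
readings = [42, 42, 42, 42, 42, 17, 88, 3]
print(register_signals(readings, min_agreeing_rods=4))  # [42]
[/code]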

Last, depending, eyes might be the start of a whole new feature involving organs that can get damaged from battle and repaired, and maybe bots can swivel eyes all the way back into their bodies to protect them sort of like sharks.  They'd probably tie in to reproduction somehow, like depending on how a bot splits the child might get an eye and the parent keeps an eye, and they both have to grow another eye.  But I'll need to think more about organs so that's really a whole other topic.

Offline Prsn828

  • Bot Destroyer
  • Posts: 139
Vision
« Reply #1 on: May 18, 2009, 09:37:24 AM »
Well, for protection, how does an eyelid sound?  I would imagine that if sand came in contact with an eye it would decrease its usability, and that eyes would deteriorate over time.

For reproduction, why not keep the newborn inside the parent's body until it develops, then "birth" it by sending it out of the parent's body?

Offline Arzgarb

  • Bot Neophyte
  • Posts: 8
Vision
« Reply #2 on: May 19, 2009, 12:23:50 PM »
Quote from: Numsgil
An eye can potentially see infinitely far away, dependent only on the apparent size of what it's looking at.  So while bots are idling because they don't see anything, the veggies might grow large enough to become visible, and the bots can go off chasing them.  Likewise, lots of smaller veggies clumped together might become visible as the clump gains more and more veggies.  Probably the way it would work is that rods are sort of like how the non-eye5 eyes work right now, but with higher resolution.  If enough rods return the same value, it registers; otherwise it's ignored.  Or maybe rods normally carry a random static signal, and a bot has to do signal analysis in its DNA to determine what might be a real signal (maybe in a codule which gets called before the other vision codules I mentioned above).  Or maybe a bot just sets a threshold for how statistically significant a signal has to be before it registers.
"Infinitely far away" sounds quite scary when the performance goal is 10 cycles/sec with 1000 bots, since in the current version vision calculation takes the most processing power in a sim. Especially with 4 eyes and much better resolution. But if you can figure out a clever algorithm for the job, this would be really great.

Also, the normally random signals that the bot has to process itself sound nice, but it'll need to be robust. Like a system codule that takes input (some rod/cone values in the int stack) and produces output (last value of boolean stack?) that is then used to decide whether or not the signal will then be processed by the actual eye codules. But this needs thinking, because a bot can't know how many rods will be activated, and so how many input values it needs to process...

Maybe the codule gets called for every single rod except the first one.  It gets 2 input values: the value of this rod and the previous one (or multiple values from both, depending on the final implementation), and produces one (boolean) output: are they "connected"?  This way connected rods form chains, and if the length of a chain reaches a threshold (decided by a sysvar?), it will get registered as a signal.  But of course, this would be a potential performance bottleneck, depending on eye resolution.
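
Something like this, maybe (the `connected` predicate stands in for whatever the per-rod codule computes; purely a sketch):

[code]
def find_chains(rod_values, connected, chain_threshold):
    """Adjacent 'connected' rods form chains; a chain at least
    chain_threshold rods long registers as a signal."""
    signals = []
    chain_start = 0
    for i in range(1, len(rod_values)):
        # Here the per-rod codule would get this rod's value and the
        # previous rod's value, and return a boolean.
        if not connected(rod_values[i - 1], rod_values[i]):
            if i - chain_start >= chain_threshold:
                signals.append((chain_start, i - 1))
            chain_start = i
    if len(rod_values) - chain_start >= chain_threshold:
        signals.append((chain_start, len(rod_values) - 1))
    return signals

# Treat rods as connected when their values differ by at most 2:
rods = [5, 6, 6, 7, 40, 41, 90]
print(find_chains(rods, lambda a, b: abs(a - b) <= 2, chain_threshold=3))
# [(0, 3)] -- the run 5,6,6,7 registers; 40,41 is too short a chain
[/code]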

Offline Testlund

  • Bot God
  • Posts: 1574
Vision
« Reply #3 on: May 19, 2009, 12:24:33 PM »
I think this is moving too much away from the unicellular concept. I would prefer touch senses instead and eyes that can only sense darkness and light and maybe just recognize movement at close range.  

Offline Prsn828

  • Bot Destroyer
  • Posts: 139
Vision
« Reply #4 on: May 19, 2009, 01:15:55 PM »
I think Testlund is right.

Here comes a new idea:

Keep the concept of rods and cones, but scrap the idea of multiple eyes.
I say each bot should get only one eye; after all, in real life a single cell doesn't even get that, so if DB3 decides to evolve usable vision, it will have to evolve eye cells.

Even touch is a sense reserved for multicellular organisms, but due to computing limits, I think it is permissible in DB.

If we do decide on bots with individual eyes, we will need to be able to guarantee that ties work precisely as they are supposed to, and can be controlled very delicately.

Finally, if we need to, I think each bot could specify a unique eye codule in its DNA that would take input values given by the eye and process them.
These input values would be like sysvars, but they would be accessible only to the eye codule, and perhaps only one store value could be written to by the eye codule, to prevent overuse of that codule.
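
To make the restriction concrete, here's one way it could be wired up (hypothetical names; nothing like this exists yet):

[code]
class EyeCodule:
    """The eye codule reads private inputs, but exposes only one store
    value to the rest of the DNA, so it can't be overused."""

    def __init__(self, dna_codule):
        self.dna_codule = dna_codule
        self.output = 0  # the single writable store value

    def run(self, eye_inputs):
        # eye_inputs are visible only inside this call, like private sysvars
        self.output = self.dna_codule(eye_inputs)

# Example: a codule that registers the brightest rod reading.
eye = EyeCodule(lambda inputs: max(inputs))
eye.run([3, 17, 9])
print(eye.output)  # 17 -- the only value the rest of the DNA sees
[/code]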

Offline Numsgil

  • Administrator
  • Bot God
  • Posts: 7742
Vision
« Reply #5 on: May 20, 2009, 03:19:49 AM »
Quote from: Arzgarb
"Infinitely far away" sounds quite scary when the performance goal is 10 cycles/sec with 1000 bots, since in the current version vision calculation takes the most processing power in a sim. Especially with 4 eyes and much better resolution. But if you can figure out a clever algorithm for the job, this would be really great.

I have a few ideas for clever algorithms.  I can explain what I'm thinking if anyone is interested, but it would probably be pretty technical.  Failing that, infinite distance is the goal, but I'll settle for less as necessary.  Plus, the problem bears a lot of similarity to ray casting, which already has some clever algorithms worked out.
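
Without getting too deep into it, one cheap cull that falls out of the "apparent size" rule might look like this (purely a sketch, not the actual algorithm):

[code]
import math

def apparent_size(radius, distance):
    """Angular diameter (radians) of a circular object."""
    return 2.0 * math.atan(radius / distance)

def visible_candidates(eye_pos, min_angular_size, objects):
    """Cull before any per-rod work: an object at ANY distance stays a
    candidate as long as its apparent size clears the eye's resolution.
    A huge veggie far away passes; a small bot slightly closer can fail."""
    out = []
    for obj in objects:
        d = math.dist(eye_pos, obj.pos)
        if d > 0 and apparent_size(obj.radius, d) >= min_angular_size:
            out.append(obj)
    return out
[/code]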

Quote
Also, the normally random signals that the bot has to process itself sound nice, but it'll need to be robust. Like a system codule that takes input (some rod/cone values in the int stack) and produces output (last value of boolean stack?) that is then used to decide whether or not the signal will then be processed by the actual eye codules. But this needs thinking, because a bot can't know how many rods will be activated, and so how many input values it needs to process...

I have a book on signal processing I'll look through.  I'll have a better idea then of whether the processing should actually be done by the bot or if it's just something we simulate behind the scenes.  My guess is that in the end there's an "optimal" algorithm for it that would be too complex for bots (and run time), so we'll just simulate it using a few sysvars to tweak parameters.

Quote from: Testlund
I think this is moving too much away from the unicellular concept. I would prefer touch senses instead and eyes that can only sense darkness and light and maybe just recognize movement at close range.  

Believe it or not, I'm keeping that argument in mind, since I know it's one you have.  I hope to expand out to other senses, like touch, hearing (basically detecting vibrations), and smell, so you can run blind sims if you want.

Also, since veggies will need the sim to have light levels (sort of like pond mode), we could tie vision into that system.  So at very low light levels bots have to rely on other senses.  That way it's possible to create an environment where blind bots and vision bots might coexist with different niches.
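
For instance, the simplest coupling would just scale an eye's effective range by the local light level (assuming a 0 to 1 light value; just a sketch):

[code]
def effective_range(base_range, light_level):
    """Scale detection range by local light, 0.0 (dark) to 1.0 (bright).
    In darkness, sighted bots have to fall back on other senses."""
    return base_range * light_level

print(effective_range(1000.0, 1.0))   # 1000.0 in full light
print(effective_range(1000.0, 0.05))  # 50.0 at night -- nearly blind
[/code]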


Quote from: Prsn828
I think Testlund is right.

Here comes a new idea:

Keep the concept of rods and cones, but scrap the idea of multiple eyes.
I say each bot should get only one eye; after all, in real life a single cell doesn't even get that, so if DB3 decides to evolve usable vision, it will have to evolve eye cells.

Even touch is a sense reserved for multicellular organisms, but due to computing limits, I think it is permissible in DB.

If we do decide on bots with individual eyes, we will need to be able to guarantee that ties work precisely as they are supposed to, and can be controlled very delicately.

Finally, if we need to, I think each bot could specify a unique eye codule in its DNA that would take input values given by the eye and process them.
These input values would be like sysvars, but they would be accessible only to the eye codule, and perhaps only one store value could be written to by the eye codule, to prevent overuse of that codule.

In real life, individual cells just have eye spots.  They let the cell detect brightness and maybe color.  Building a system of vision from this for Darwinbots would be possible, but it would really be an amazing feat of bot engineering.  Even a primitive eye involves hundreds of cells and some structural engineering (invaginating the schmear of vision cells so the organism can sense direction).  And that doesn't include the engineering (and behind-the-scenes physics) necessary to create lenses.

On top of that, doing detailed vision and doing simple eyespot vision are roughly computationally equivalent, since the hard part is the "what can see what" determination.

So basically there's a whole spectrum of complexity available.  At one end is something more realistic for cells, and at the other is something more like octopus eyes.  While eye spots are more realistic, I think octopus eyes create a more interesting and identifiable sim for the average user.  The end result is probably something like that first stage in Spore, or fl0w.