I'm playing around with how vision might work for DB3. I'm trying to mimic how real-life (mammal) eyes work, without compromising the rest of the sim, making things unfair, etc. Here's what I'm thinking so far:
Each bot can have up to one eye per side (so up to four total). The exact position of each eye along its side is something a bot can control, but probably not change easily. All eyes have a fixed field of view (probably something slightly less than 180 degrees), but they can also swivel in their sockets.
A bot eye has rods distributed along the periphery of the eye, and cones in the center. How far out from center the transition from cones to rods sits can be controlled by the bot, but again not changed easily. Rods are extremely sensitive, so they're good at picking out faint objects (small bots, or large bots very far away), especially if those faint objects are moving. But they're lousy at resolving details, so most of the refvars won't display for those objects. Cones are less sensitive, so a bot has to be much closer to use them. But they're very good at picking up detail, so refvars will display for whatever they're looking at.
The overall sensitivity of the eye is again something a bot can set, but not change easily, with more sensitive eyes coming at a higher price somehow.
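To make those knobs concrete, here's a rough sketch in Python of the per-eye settings described above. None of these names are real DB3 API; the cost model at the bottom is purely made up to illustrate "more sensitivity costs more somehow".

```python
from dataclasses import dataclass

@dataclass
class EyeConfig:
    """Hypothetical per-eye settings; the 'hard to change' genome-level knobs."""
    side: str                  # which side of the bot the eye sits on
    position: float            # 0..1, where along that side the eye is placed
    swivel: float              # current swivel angle (radians), adjustable each cycle
    fov: float = 3.0           # fixed field of view, slightly less than pi radians
    cone_fraction: float = 0.3 # how far out from center the cones extend before rods take over
    sensitivity: float = 1.0   # overall sensitivity; higher values should cost more

def eye_upkeep_cost(eye: EyeConfig, base_cost: float = 0.1) -> float:
    """Made-up cost model: more sensitive eyes cost more energy per cycle."""
    return base_cost * eye.sensitivity ** 2
```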
If a bot arranges its eyes so that the fields of view overlap, the range is extended for both rods and cones in the overlap area according to the "quadratic summation model": $\sqrt{S_1^2 + S_2^2}$, where $S_n$ is the sensitivity of the nth eye (source). So binocular vision lets you detect objects and determine details farther out than monocular vision. But it also means you can't look over your shoulder easily, so to speak. And you probably need to physically orient yourself in order to use it.
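A quick sketch of the quadratic summation idea: where two eyes' fields of view overlap, the effective sensitivity is the root-sum-square of the individual sensitivities, so two eyes at sensitivity 1.0 give about 1.41 in the overlap region rather than 2.0.

```python
import math

def combined_sensitivity(sensitivities):
    """Quadratic summation: sqrt(S1^2 + S2^2 + ...) over all eyes covering a point."""
    return math.sqrt(sum(s * s for s in sensitivities))

print(combined_sensitivity([1.0, 1.0]))  # ~1.414: binocular overlap beats a single eye
print(combined_sensitivity([1.0]))       # 1.0: monocular, no boost
```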
Each bot has "system codules" for vision. One codule handles signals from the rods: for every object visible to the rods, the rod codule gets called and the relevant sysvars are updated. The objects are sorted from most peripheral to most central, so a bot can overwrite commands issued for more peripheral objects with commands for more central ones. Likewise for cones: a cone codule gets called after all the rod vision commands are finished, again with objects going from most peripheral first to most central last. Once the vision codules are finished, the rest of the DNA starts executing. So vision happens before any other DNA gets called.
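Here's roughly how I picture the per-cycle ordering, written as a Python sketch rather than actual DB3 DNA; the bot methods (update_rod_sysvars, call_codule, etc.) are hypothetical names just to show the flow.

```python
def run_vision_phase(bot, rod_objects, cone_objects):
    """Sketch of the proposed cycle ordering: rods, then cones, then the rest of the DNA."""
    # Most peripheral objects first, most central last, so commands for central
    # objects can overwrite commands issued for peripheral ones.
    for obj in sorted(rod_objects, key=lambda o: o.angle_from_center, reverse=True):
        bot.update_rod_sysvars(obj)   # hypothetical: coarse info only, few refvars
        bot.call_codule("rod_vision")
    for obj in sorted(cone_objects, key=lambda o: o.angle_from_center, reverse=True):
        bot.update_cone_sysvars(obj)  # hypothetical: full refvars for this object
        bot.call_codule("cone_vision")
    bot.run_main_dna()                # everything else runs after vision
```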
An eye can potentially see infinitely far away, dependent only on the apparent size of what it's looking at. So as bots idle because they don't see anything, the veggies might grow large enough to become visible and the bots can go off chasing them. Likewise, lots of smaller veggies clumped together might also become visible as the clump gains more and more veggies. Probably the way it would work is that rods are sort of like how the non-eye5 eyes work right now, but with higher resolution: if enough rods have the same value, it registers; otherwise it's ignored. Or maybe rods normally carry a random static signal, and a bot has to do signal analysis in its DNA to figure out what might be a real signal (maybe in a codule which gets called before the other vision codules I mentioned above). Or maybe a bot just sets a threshold for how statistically significant a signal has to be before it registers.
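For the "does a faint blob register at all" question, the threshold option is probably the simplest of the three. A toy Python sketch of what that could look like, assuming each rod reports a noisy brightness value (the noise level, sigma threshold, and rod count are all invented numbers): count how many rods agree on a signal above the noise floor, and only register the object if enough of them do.

```python
import random

NOISE_STD = 0.1             # made-up noise level on each rod
THRESHOLD = 3 * NOISE_STD   # a reading must clear ~3 sigma to count as a hit
MIN_RODS = 4                # and enough rods must agree before anything registers

def rods_register(true_signal: float, n_rods: int = 16) -> bool:
    """Toy model: an object registers only if enough rods see it above the static."""
    readings = [true_signal + random.gauss(0.0, NOISE_STD) for _ in range(n_rods)]
    hits = sum(1 for r in readings if r > THRESHOLD)
    return hits >= MIN_RODS

# A big or close veggie (strong signal) almost always registers,
# while a distant speck (weak signal) usually gets lost in the static.
print(rods_register(0.5))   # likely True
print(rods_register(0.05))  # likely False
```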
Last, depending on how things shake out, eyes might be the start of a whole new feature involving organs that can get damaged in battle and repaired, and maybe bots could swivel their eyes all the way back into their bodies to protect them, sort of like sharks do. They'd probably tie in to reproduction somehow: depending on how a bot splits, the child might get an eye and the parent keeps an eye, and they both have to grow another one. But I'll need to think more about organs, so that's really a whole other topic.