Seeing shots
EricL:
I really don't like the one-set-of-eyes-per-object-type approach. It's parallel, but it doesn't scale. I envision a system with lots of different types of viewable objects and a vision system where bots not pre-coded for a certain object type can still see it and have some basic logic to deal with unknown object types, such as fleeing from or attacking things they don't recognise...
So, I prefer the one-set-of-general-purpose-eyes approach, where refvar-style state variables indicate what you happen to be looking at. We already have .reftype, which is how bots today can distinguish between bots and shapes. If we are to enhance viewed-object discrimination, I would like us to do so along these lines. We could:
Add additional general-purpose eyes. This would allow narrower eye widths and finer-grained discrimination while maintaining the same overall view span, applicable to any object type.
Add a minimal set of .refvars for each and every eye. You may have to look at something via the one focus eye to populate the detailed refvars, but we could add a .reftypeeyeX and perhaps one or two other general-purpose refvars for each and every eye for better vision parallelism.
Add a per-eye .focallength. Think of this as a sensitivity adjustment which optimizes the eye for a specific distance range. Done right, eyes with a long focal length would ignore objects at close range and vice versa. Sophisticated vision systems could optimize focal length for maximum distance-change sensitivity over a given range, for better tracking or velocity matching. Overlapping eyes of different focal lengths become very useful.... We could even extend the vision range some - the current range can be thought of as the limit of the default focal length setting....
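As an aside, the focal-length idea above could be modeled as a per-eye response curve that peaks at the eye's focal distance and fades to zero outside its sensitive band. A minimal Python sketch; the function name, the Gaussian falloff, and the `band` parameter are all hypothetical choices for illustration, not anything actually in Darwinbots:

```python
# Hypothetical model of a per-eye .focallength: the eye returns a
# strong value (up to 100) only for objects near its focal distance.
# The log-distance Gaussian falloff is an arbitrary illustrative choice.
import math

def eye_reading(distance, focal_length, band=0.5, max_value=100):
    """Return an eye value that peaks when the object sits at the eye's
    focal distance and fades toward 0 outside its sensitive range.
    `band` sets the relative width of that range."""
    if distance <= 0:
        return max_value
    # Log-ratio error so the sensitive band scales with focal length.
    error = math.log(distance / focal_length)
    return round(max_value * math.exp(-(error / band) ** 2))

# A short-focus eye responds strongly in its range and ignores far objects;
# overlapping eyes with different focal lengths would partition the range.
in_range = eye_reading(50, focal_length=50)    # object at the focal distance
far_miss = eye_reading(800, focal_length=50)   # far outside the band
```

With overlapping eyes of different focal lengths, comparing readings across eyes would give a bot a coarse distance estimate without any extra refvars.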
Make use of eye values > 100 or < 0. Not sure how, but it's open territory at the moment...
Another idea is to have a .eyeXmode. Bots could put each eye into one of a set of modes, which could filter out certain object types - a mode to see only bots, a mode to see only shots, etc. The same mechanism could be used to change how eyes work. For example, a mode where instead of an eye reading how far or close something is, it would read what percent of the eye is occluded...
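To make the .eyeXmode idea concrete, here is a small Python sketch of the filtering half of the proposal. The mode constants and the object representation are made up for illustration; .eyeXmode is a proposed sysvar, not an existing one:

```python
# Hypothetical sketch of a per-eye mode filter: each eye can be
# restricted to a single object type, or left unfiltered.
# Mode constants are assumptions for illustration only.
SEE_ALL, SEE_BOTS, SEE_SHOTS, SEE_SHAPES = 0, 1, 2, 3

MODE_TO_TYPE = {SEE_BOTS: "bot", SEE_SHOTS: "shot", SEE_SHAPES: "shape"}

def visible_objects(objects, eye_mode):
    """Filter the objects inside one eye's view cone by that eye's mode."""
    if eye_mode == SEE_ALL:
        return objects
    wanted = MODE_TO_TYPE[eye_mode]
    return [obj for obj in objects if obj["type"] == wanted]

# Everything currently in this eye's view cone:
scene = [{"type": "bot", "dist": 120},
         {"type": "shot", "dist": 40},
         {"type": "shape", "dist": 300}]

print(visible_objects(scene, SEE_SHOTS))  # only the shot survives the filter
```

Since each eye would carry its own mode, one eye could watch for shots while a neighbouring eye with the same view cone watches for bots, which is the parallelism being discussed.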
googlyeyesultra:
I'm guessing this is getting off topic, so you might want to create a new thread.
Now, back to the (off-topic) discussion. The focal-length idea is cool, I agree, but you'll need a ridiculously huge number of eyes - 25 at a bare minimum if you want us to try to get a high resolution of varying distances over even a decent angle.
I'm rather braindead, so I'm afraid I can't contribute much more than that. Oh, and yes, numsgil is right about the scale at which trig functions should operate.
Numsgil:
--- Quote from: EricL ---I really don't like the one-set-of-eyes-per-object-type approach. It's parallel, but it doesn't scale. I envision a system with lots of different types of viewable objects and a vision system where bots not pre-coded for a certain object type can still see it and have some basic logic to deal with unknown object types, such as fleeing from or attacking things they don't recognise...
--- End quote ---
You could have the "filters" for the eyes act like the eyewidths. Totally modifiable from situation to situation. And if you like, even have a "no filters" option for this imaginary sysvar where you need to use reftype to understand what you're seeing.
--- Quote ---Add additional general-purpose eyes. This would allow narrower eye widths and finer-grained discrimination while maintaining the same overall view span, applicable to any object type.
--- End quote ---
I don't like adding more eyes unless absolutely necessary.
--- Quote ---Add a minimal set of .refvars for each and every eye. You may have to look at something via the one focus eye to populate the detailed refvars, but we could add a .reftypeeyeX and perhaps one or two other general-purpose refvars for each and every eye for better vision parallelism.
--- End quote ---
I don't see this scaling very well.
--- Quote ---Add a per-eye .focallength. Think of this as a sensitivity adjustment which optimizes the eye for a specific distance range. Done right, eyes with a long focal length would ignore objects at close range and vice versa. Sophisticated vision systems could optimize focal length for maximum distance-change sensitivity over a given range, for better tracking or velocity matching. Overlapping eyes of different focal lengths become very useful.... We could even extend the vision range some - the current range can be thought of as the limit of the default focal length setting....
--- End quote ---
Focal length seems more suited to information gathering than to occlusion issues. That is, give only vague information on an object unless the focal length is right. But I see this as more limiting what bots can see at first glance, and less adding any new features.
--- Quote ---Another idea is to have a .eyeXmode. Bots could put each eye into one of a set of modes, which could filter out certain object types - a mode to see only bots, a mode to see only shots, etc. The same mechanism could be used to change how eyes work. For example, a mode where instead of an eye reading how far or close something is, it would read what percent of the eye is occluded...
--- End quote ---
Ah, looks like you just misunderstood what I was driving at. This is exactly what I was thinking. Have each eye's mode be settable independently of the others.
EricL:
--- Quote from: Numsgil ---Ah, looks like you just misunderstood what I was driving at. This is exactly what I was thinking. Have each eye's mode be settable independantly of each other.
--- End quote ---
Ah, yes indeed. I was thinking you wanted to add 9 eye sysvars for every new object type! Great minds think alike...