Author Topic: Seeing shots  (Read 3419 times)

Offline EricL

  • Administrator
  • Bot God
  • Posts: 2266
Seeing shots
« on: August 20, 2007, 10:24:17 AM »
Quote from: googlyeyesultra
Hmm, another way to do it would be to predict where and when the returning shots would be, based on the velocities, store that in a queue, and go run around and grab them while constantly firing.
Or we could make shots visible......  Would not be difficult.  .reftype would indicate they are shots and not bots or shapes (the two things bots can see already).  There would be a perf impact, proportional to the number of shots in the sim....
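For illustration, a minimal Python sketch of the prediction-queue idea quoted above, assuming shots travel in straight lines at a constant velocity per tick; every name here is invented for the example.

Code:
import heapq

def predict_position(x, y, vx, vy, ticks):
    # Straight-line, constant-velocity extrapolation.
    return x + vx * ticks, y + vy * ticks

pending = []  # min-heap of (arrival_tick, x, y), soonest first

def enqueue_shot(now, x, y, vx, vy, ticks_until_return):
    # Record where and when the returning shot should be.
    px, py = predict_position(x, y, vx, vy, ticks_until_return)
    heapq.heappush(pending, (now + ticks_until_return, px, py))

def shots_due(now):
    # Pop every predicted shot whose arrival tick has come,
    # so the bot can run around and grab them while firing.
    due = []
    while pending and pending[0][0] <= now:
        due.append(heapq.heappop(pending))
    return due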
Many beers....

Offline googlyeyesultra

  • Bot Destroyer
  • Posts: 109
Seeing shots
« Reply #1 on: August 20, 2007, 10:37:13 AM »
I'd rather not take the perf impact. Besides, you'd either need two sets of eyes or you'd end up obscuring your own vision with your own shots. I'll be happy if you give me some trig functions.

Offline EricL

  • Administrator
  • Bot God
  • Posts: 2266
Seeing shots
« Reply #2 on: August 20, 2007, 04:33:46 PM »
I do plan to make shots visible at some point.  It will be optional, like it is with shapes, so the price w.r.t. perf will be something people can choose to pay or not.  There will be no perf impact if people elect not to enable shot visibility.

You will indeed be able to see your own shots.  Refvars will allow you to distinguish them from others'.  If you want finer eyesight resolution so as to see past your own shots to what you are shooting at, well, that is what eyewidth is all about.  If people find that insufficient, we could add a per-eye focal length, something that might be useful in any respect.

A little philosophy.  Incomplete physics bother me.  The fact that shots are invisible, that ties are as well, that there is no collision detection between ties and bots or ties and shots or ties and shapes, that ties have no fluid resistance, that our current movement paradigm defines the laws of physics w.r.t. angular momentum and so on...  these limitations in the physics limit the richness of what can be designed or evolved.  Yes, there is a perf impact to implementing all of these things, but CPU cycles get cheaper as time goes by.  I vote that over time, we spend the majority of that currency on increasing environmental richness, which I submit includes, in part, completing the physics of existing morphological artifacts.

In parallel, I will take a look at adding some trig functions.  If you can provide specifics and prioritize what you would like to see, that would help.  If it isn't too much work, you may well see it relatively soon, probably sooner than visible shots.
Many beers....

Offline googlyeyesultra

  • Bot Destroyer
  • Posts: 109
Seeing shots
« Reply #3 on: August 20, 2007, 04:57:57 PM »
My top priorities are entering the angle and the hypotenuse and getting back one of the other two sides (i.e., sine and cosine). Either that, or some sort of angle-to-slope and slope-to-angle command would be alright.

Offline Numsgil

  • Administrator
  • Bot God
  • Posts: 7742
Seeing shots
« Reply #4 on: August 20, 2007, 08:50:24 PM »
For sine and cosine, have them scaled so that a full circle is 1256 units.  Have them return values that are also scaled by 1256.  That way the input and output are scaled to each other and to the rest of the sysvars.
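For illustration, a Python sketch of trig functions under that scaling; 1256 is roughly 200 × 2π, so one unit is about 1/200 of a radian. The db_* names are invented here, not actual sysvars.

Code:
import math

CIRCLE = 1256  # full circle in sysvar units (~ 200 * 2 * pi)

def db_sin(angle):
    # Input in 1256ths of a circle, output scaled by 1256.
    return round(math.sin(angle * 2 * math.pi / CIRCLE) * CIRCLE)

def db_cos(angle):
    return round(math.cos(angle * 2 * math.pi / CIRCLE) * CIRCLE)

def db_atan2(dy, dx):
    # Slope-to-angle: returns a heading in sysvar units.
    return round(math.atan2(dy, dx) * CIRCLE / (2 * math.pi))

# The side adjacent to a 157-unit angle (45 degrees) with
# hypotenuse 500 is 500 * db_cos(157) // 1256, about 353.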

For vision, seeing shots is great.  I don't mind perf impacts, especially when they can be optimized to O(n log n).  However, this falls into a broader category of head scratchers.  Even when you can change your vision radius, it's theoretically possible to have some shot block your vision, or even some very small bot if it's strategically placed.  The math is all set up as if the bots are 3D objects sitting in a shared plane.  Which means larger bots would be visible even if there's a smaller bot or shot in front of them, since they still rise above the horizon.  This is how I reason that bots can easily tell how large the other is, even if they're partially obscured from each other.  In this view, a shot blocking the sight of another bot is absurd.
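For illustration, a Python sketch of that horizon argument, assuming circular bots viewed edge-on in a shared plane; this is one rendering of the reasoning, not the simulator's actual code.

Code:
import math

def angular_height(radius, distance):
    # Apparent angular size of a disc seen edge-on.
    return 2 * math.atan2(radius, distance)

def rises_above(target_radius, target_dist, blocker_radius, blocker_dist):
    # A larger, farther bot still shows over a smaller, nearer
    # blocker when its apparent height is greater.
    return angular_height(target_radius, target_dist) > \
           angular_height(blocker_radius, blocker_dist)

# A big bot at distance 400 still shows over a small shot at 100:
# rises_above(100, 400, 5, 100)  ->  True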

I'm running into this problem as I design the workings for the next version as well.  The current system has several eyes trained at different spots, each of which only returns the closest object.  While this might work as the system gets more complex, I think it's ill suited for it.

For the new versions I'm trying to find a way to have each visible object be given its own processing thread.  For the current 2.X program, the only reasonable approach I can think of is to have the ability to devote an eye to a specific object type.  One eye only detects shapes.  Another only detects bots.  Another only detects shots.  Another ties, etc.

Quote
that ties have no fluid resistance
Ugh, this was such a headache.  Carlo originally had some code set up for this, but it was extremely simplistic.  I tried solving it more generally, but since you can't define a tie as anything except its endpoints (the bots), the math is incredibly messy.  Which is part of the reason why, for the new version, I'm moving to bots being the ones that are capsule-shaped, with connections being single points.
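For illustration, one rough way to approximate fluid resistance on a tie defined only by its endpoints is to sample interpolated velocities along the segment; a Python sketch with invented names, not Carlo's original code nor the planned rewrite.

Code:
def tie_drag(v1, v2, drag_coeff, samples=8):
    # v1, v2: (vx, vy) velocities of the tie's two endpoint bots.
    # Sample velocities along the segment, apply linear drag to
    # each sample, and accumulate the forces back onto the
    # endpoints by lever weight.
    f1, f2 = [0.0, 0.0], [0.0, 0.0]
    for i in range(samples):
        t = (i + 0.5) / samples                    # position along the tie
        for axis in range(2):
            v = (1 - t) * v1[axis] + t * v2[axis]  # interpolated velocity
            f = -drag_coeff * v / samples          # drag on this sample
            f1[axis] += (1 - t) * f                # share to endpoint 1
            f2[axis] += t * f                      # share to endpoint 2
    return f1, f2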

Offline EricL

  • Administrator
  • Bot God
  • *****
  • Posts: 2266
    • View Profile
Seeing shots
« Reply #5 on: August 20, 2007, 09:31:59 PM »
I really don't like the one-set-of-eyes-per-object-type approach.  It's parallel, but it doesn't scale.  I envision a system with lots of different types of viewable objects and a vision system where bots not pre-coded for a certain object type can still see it and have some basic logic to deal with unknown object types, such as fleeing from or attacking things they don't recognise...

So, I prefer the one-set-of-general-purpose-eyes approach, where refvar-style state variables indicate what you happen to be looking at.  We already have .reftype, which is how bots today can distinguish between bots and shapes.  If we are to enhance viewed-object discrimination, I would like us to do so along these lines.  We could:

Add additional general purpose eyes.  This would allow narrower resolution and finer-grained discrimination while maintaining the same overall view span, applicable to any object type.

Add a minimal set of .refvars for each and every eye.  You may have to look at something via the one focus eye to populate the detailed refvars, but we could add a .reftypeeyeX and perhaps one or two other general purpose refvars for each and every eye for better vision parallelism.

Add per-eye .focallength.  Think of this as a sensitivity adjustment which optimizes the eye for a specific distance range.  Done right, eyes with a long focal length would ignore objects at close range and vice versa.  Sophisticated vision systems could optimize focal length for maximum distance-change sensitivity over a given range for better tracking or velocity matching.  Overlapping eyes of different focal lengths become very useful....  We could even extend the vision range some - the current range can be thought of as the limit of the default focal length setting....

Make use of eye values > 100 or < 0.  Not sure how, but it's open territory at the moment...

Another idea is to have a .eyeXmode.  Bots could put each eye into one of a set of modes, which could filter out certain object types - a mode to see only bots, a mode to see only shots, etc.  The same mechanism could be used to change how eyes work.  For example, a mode where instead of an eye reading how far or close something is, it would read what percent of the eye is occluded...
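For illustration, a Python sketch of how per-eye modes and focal ranges might filter what an eye reports; the kind constants, mode values, and focal_range parameter are all hypothetical.

Code:
KIND_BOT, KIND_SHOT, KIND_SHAPE, KIND_TIE = range(4)
MODE_ALL = -1  # no filter; fall back on .reftype-style refvars

def eye_value(mode, focal_range, objects):
    # Reading for the nearest object in this eye's arc that passes
    # the mode filter and falls inside the focal range.
    near, far = focal_range
    best = None
    for kind, dist in objects:
        if mode != MODE_ALL and kind != mode:
            continue               # filtered out by the eye's mode
        if not near <= dist <= far:
            continue               # outside this eye's focal length
        if best is None or dist < best:
            best = dist
    return best                    # None: the eye sees nothing

# An eye tuned to shots at close range ignores the bot entirely:
# eye_value(KIND_SHOT, (0, 300), [(KIND_BOT, 120), (KIND_SHOT, 80)]) -> 80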
« Last Edit: August 20, 2007, 09:39:38 PM by EricL »
Many beers....

Offline googlyeyesultra

  • Bot Destroyer
  • Posts: 109
Seeing shots
« Reply #6 on: August 20, 2007, 09:43:37 PM »
I'm guessing this is getting off topic, so you might want to create a new thread.

Now, back to the (off-topic) discussion. The focal length idea is cool, I agree, but you'll need ridiculous numbers of eyes. Ridiculously huge numbers, like 25 at a bare minimum, if you want us to try to get high resolution across varying distances over even a decent angle.

I'm rather braindead, so I'm afraid I can't contribute much more than that. Oh, and yes, Numsgil is right about the scale at which trig functions should operate.

Offline Numsgil

  • Administrator
  • Bot God
  • *****
  • Posts: 7742
    • View Profile
Seeing shots
« Reply #7 on: August 20, 2007, 10:34:47 PM »
Quote from: EricL
I really don't like the one-set-of-eyes-per-object-type approach.  It's parallel, but it doesn't scale.  I envision a system with lots of different types of viewable objects and a vision system where bots not pre-coded for a certain object type can still see it and have some basic logic to deal with unknown object types, such as fleeing from or attacking things they don't recognise...

You could have the "filters" for the eyes act like the eyewidths.  Totally modifiable from situation to situation.  And if you like, even have a "no filters" option for this imaginary sysvar where you need to use reftype to understand what you're seeing.

Quote
Add additional general purpose eyes.  This would allow narrower resolution and finer-grained discrimination while maintaining the same overall view span, applicable to any object type.

I don't like adding more eyes unless absolutely necessary.

Quote
Add a minimal set of .refvars for each and every eye.  You may have to look at something via the one focus eye to populate the detailed refvars, but we could add a .reftypeeyeX and perhaps one or two other general purpose refvars for each and every eye for better vision parallelism.

I don't see this scaling very well.

Quote
Add per-eye .focallength.  Think of this as a sensitivity adjustment which optimizes the eye for a specific distance range.  Done right, eyes with a long focal length would ignore objects at close range and vice versa.  Sophisticated vision systems could optimize focal length for maximum distance-change sensitivity over a given range for better tracking or velocity matching.  Overlapping eyes of different focal lengths become very useful....  We could even extend the vision range some - the current range can be thought of as the limit of the default focal length setting....

Focal length seems more suited to information gathering than to occlusion issues.  That is, give only vague information on an object unless the focal length is right.  But I see this as more limiting what bots can see at first glance and less adding any new features.

Quote
Another idea is to have a .eyeXmode.  Bots could put each eye into one of a set of modes, which could filter out certain object types - a mode to see only bots, a mode to see only shots, etc.  The same mechanism could be used to change how eyes work.  For example, a mode where instead of an eye reading how far or close something is, it would read what percent of the eye is occluded...

Ah, looks like you just misunderstood what I was driving at.  This is exactly what I was thinking.  Have each eye's mode be settable independently of the others.

Offline EricL

  • Administrator
  • Bot God
  • *****
  • Posts: 2266
    • View Profile
Seeing shots
« Reply #8 on: August 20, 2007, 10:43:13 PM »
Quote from: Numsgil
Ah, looks like you just misunderstood what I was driving at.  This is exactly what I was thinking.  Have each eye's mode be settable independently of the others.

Ah, yes indeed.  I was thinking you wanted to add 9 eye sysvars for every new object type!  Great minds think alike...
Many beers....