
Seeing shots


EricL:

--- Quote from: googlyeyesultra ---Hmm, another way to do it would be to predict where and when the returning shots would be, based on the velocities, store that in a queue, and go run around and grab them while constantly firing.
--- End quote ---
Or we could make shots visible...  Would not be difficult.  .reftype would indicate they are shots and not bots or shapes (the two things bots can see already).  There would be a perf impact, proportional to the number of shots in the sim...
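The prediction-queue idea quoted above could be sketched roughly as follows. This is illustrative Python, not bot DNA (which has no such data structures); `predict_return` and the (tick, position) queue entries are hypothetical names, and the sketch assumes shots travel at constant velocity:

```python
import heapq

def predict_return(pos, vel, steps):
    """Linearly extrapolate where a shot will be after `steps` ticks.

    Assumes constant velocity; pos and vel are (x, y) tuples.
    """
    return (pos[0] + vel[0] * steps, pos[1] + vel[1] * steps)

# Min-heap keyed by predicted arrival tick: pop the soonest pickup first,
# then run to that spot while continuing to fire.
pickup_queue = []
heapq.heappush(pickup_queue, (30, predict_return((100.0, 50.0), (-2.0, 1.0), 30)))
heapq.heappush(pickup_queue, (10, predict_return((40.0, 40.0), (1.0, 0.0), 10)))

tick, where = heapq.heappop(pickup_queue)  # soonest predicted pickup
```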

googlyeyesultra:
I'd rather not take the perf impact. Besides, you'd either need two sets of eyes or you'd end up obscuring your own vision with your own shots. I'll be happy if you give me some trig functions.

EricL:
I do plan to make shots visible at some point.  It will be optional, like it is with shapes, so the price w.r.t. perf will be something people can choose to pay or not.  There will be no perf impact if people elect not to enable shot visibility.

You will indeed be able to see your own shots.  Refvars will allow you to distinguish them from others'.  If you want finer eyesight resolution so as to see past your own shots at what you are shooting at, well, that is what eyewidth is all about.  If people find that insufficient, we could add a per-eye focal length, something that might be useful in its own right.

A little philosophy.  Incomplete physics bothers me.  The fact that shots are invisible, that ties are as well, that there is no collision detection between ties and bots or ties and shots or ties and shapes, that ties have no fluid resistance, that our current movement paradigm defines the laws of physics w.r.t. angular momentum, and so on...  these limitations in the physics limit the richness of what can be designed or evolved.  Yes, there is a perf impact to implementing all of these things, but CPU cycles get cheaper as time goes by.  I vote that over time, we spend the majority of that currency on increasing environmental richness, which I submit includes, in part, completing the physics of existing morphological artifacts.

In parallel, I will take a look at adding some trig functions.  If you can provide specifics and prioritize what you would like to see, that would help.  If it isn't too much work, you may well see it relatively soon, probably sooner than visible shots.

googlyeyesultra:
My top priorities are entering the angle and the hypotenuse and getting back one of the other two sides (i.e., sine and cosine). Either that, or some sort of angle-to-slope and slope-to-angle command would be alright.
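The angle-to-slope and slope-to-angle conversions mentioned here could look something like the Python sketch below. It assumes DarwinBots' 1256-units-per-full-circle angle convention; the function names are hypothetical, not actual sysvars:

```python
import math

FULL_CIRCLE = 1256  # DarwinBots angle units per full turn

def angle_to_slope(angle_units):
    # Convert a 1256-unit angle to a dy/dx slope.
    return math.tan(angle_units * 2 * math.pi / FULL_CIRCLE)

def slope_to_angle(slope):
    # Inverse: slope back to angle units, rounded to an integer sysvar value.
    return round(math.atan(slope) * FULL_CIRCLE / (2 * math.pi))
```

A slope of 1.0 (45 degrees) round-trips to 157 units, i.e. one eighth of the 1256-unit circle.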

Numsgil:
For sine and cosine, have them scaled so that a full circle is 1256 units.  Have them return values that are also * 1256.  That way the input and output are scaled to each other and the rest of the sysvars.
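Numsgil's scaling proposal can be made concrete with a short sketch, assuming round-to-nearest-integer outputs (an assumption; the post doesn't specify rounding):

```python
import math

SCALE = 1256  # full circle in angle units; results are scaled by the same factor

def scaled_sin(angle_units):
    # Sine over a 1256-unit circle, result scaled into [-1256, 1256].
    return round(SCALE * math.sin(angle_units * 2 * math.pi / SCALE))

def scaled_cos(angle_units):
    return round(SCALE * math.cos(angle_units * 2 * math.pi / SCALE))
```

So a quarter circle (314 units) gives `scaled_sin` = 1256 and `scaled_cos` = 0, keeping inputs and outputs in the same integer range as the rest of the sysvars.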

For vision, seeing shots is great.  I don't mind perf impacts, especially when they can be optimized to O(n log n).  However, this falls into a broader category of head scratchers.  Even when you can change your vision radius, it's theoretically possible to have some shot block your vision, or even some very small bot if it's strategically placed.  The math is all set up as if the bots are all in the same plane, but the world is 3D.  Which means larger bots would be visible even if there's a smaller bot or shot in front of them, since they still rise above the horizon.  This is how I reason that bots can easily tell how large the other is, even if they're partially obscured from each other.  In this view, a shot blocking the sight of another bot is absurd.
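One way to formalize the "rises above the horizon" argument, assuming circular bots viewed edge-on: a farther object remains visible if the half-angle it subtends exceeds that of the nearer object in front of it. This is an illustrative sketch, not code from the simulator:

```python
import math

def angular_radius(radius, distance):
    """Half-angle subtended by a circular object of `radius` at `distance`."""
    return math.atan2(radius, distance)

def rises_above(far_radius, far_dist, near_radius, near_dist):
    # A farther, larger bot still 'rises above' a nearer, smaller occluder
    # (a shot, say) if it subtends the larger half-angle.
    return angular_radius(far_radius, far_dist) > angular_radius(near_radius, near_dist)
```

For example, a radius-50 bot at distance 200 subtends about 0.245 rad, so a radius-5 shot at distance 100 (about 0.05 rad) can't fully hide it.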

I'm running into this problem as I design the workings for the next version as well.  The current system has several eyes trained at different spots, each of which only returns the closest object.  While this might work as the system gets more complex, I think it's ill-suited for it.

For the new version I'm trying to find a way to give each visible object its own processing thread.  For the current 2.X program, the only reasonable approach I can think of is the ability to devote an eye to a specific object type.  One eye only detects shapes.  Another only detects bots.  Another only detects shots.  Another ties, etc.
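The per-type eye idea could be sketched like this. Everything here is hypothetical (the object kinds, `Obj`, and `eye_reading` are illustration only, standing in for whatever reftype values the simulator would expose):

```python
from dataclasses import dataclass
import math

@dataclass
class Obj:
    kind: str   # e.g. "bot", "shot", "shape", "tie" (stand-ins for reftypes)
    x: float
    y: float

def eye_reading(objects, wanted_kind, ex, ey):
    """Distance to the nearest object of one kind, ignoring all others.

    Models an eye 'devoted' to a single object type; returns None
    when nothing of that type is in view.
    """
    dists = [math.hypot(o.x - ex, o.y - ey) for o in objects if o.kind == wanted_kind]
    return min(dists, default=None)
```

A bot at the origin with a shot at (3, 4) and a bot at (6, 8) in view would read 5.0 from its shot-eye and 10.0 from its bot-eye, with neither occluding the other.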


--- Quote ---that ties have no fluid resistance
--- End quote ---
Ugh, this was such a headache.  Carlo originally had some code set up for this, but it was extremely simplistic.  I tried solving it more generally, but since you can't define a tie as anything except its endpoints (the bots), the math is incredibly messy.  That's part of the reason why, in the new version, I'm moving to bots being capsule-shaped and connections being single points.
