Implementing Sound
rsucoop:
Create a point and call it the mouth; say it's right where the eye/gun is. Every time a value other than 0 is stored in it, a curved line is shot forward at some speed. When it hits another bot, one of two things happens: if the sound hits the back, only half of the information can be heard; if it hits the front, the bot hears all of it. The curved packet would require a few out vars so they can be read when it hits. I say 20, but limit 10 of them to 1s and -1s (or 0) only. This would give the ability to create language over greater distances, but it should somehow be linked to multi-bots. Perhaps upon collision both bots' out vars are stored into the packet of the curve and shot out of either bot with a 1 in the mouth var. I think this would open the doors for a truly intelligent species.
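A minimal sketch of the proposed packet, in Python for illustration only. The .mouth location, the out-var names, and the bot fields (mem, x, y, heading) are all hypothetical here, not real Darwinbots sysvars:

--- Code: ---
from dataclasses import dataclass, field

N_SLOTS = 20     # out vars carried by each packet (per the proposal)
N_TRINARY = 10   # slots limited to the -1/0/1 alphabet

@dataclass
class SoundPacket:
    x: float            # wavefront position
    y: float
    heading: float      # direction of travel, in radians
    speed: float        # distance travelled per cycle
    data: list = field(default_factory=lambda: [0] * N_SLOTS)

def emit_sound(bot):
    """Fire a packet from the mouth point whenever .mouth is nonzero."""
    if bot.mem.get("mouth", 0) == 0:
        return None
    packet = SoundPacket(bot.x, bot.y, bot.heading, speed=8.0)
    for i in range(N_SLOTS):
        v = bot.mem.get(f"out{i + 1}", 0)
        # clamp the first N_TRINARY slots to -1, 0 or 1
        packet.data[i] = max(-1, min(1, v)) if i < N_TRINARY else v
    return packet
--- End code ---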
shvarz:
You realize, of course, that sound does not spread as a one-directional squiggly line.
rsucoop:
--- Quote from: shvarz ---You realize, of course, that sound does not spread as a one-directional squiggly line.
--- End quote ---
Yes, it would be shaped like ) and head -> respectively. It's more of a percentage of effectiveness to transmit/receive. A mouth projects outward, so the sound is focused in a central forward area; those behind can still hear some things.
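That front/back rule could reduce to a single angle test. A sketch, assuming headings are in radians and that a packet travelling opposite to the receiver's facing strikes its front:

--- Code: ---
import math

def hearing_fraction(packet_heading, receiver_heading):
    """1.0 when the packet strikes the receiver's front, 0.5 when it
    arrives from behind, per the proposal above."""
    # cos < 0 means the packet travels against the receiver's facing,
    # i.e. it hits the bot head-on
    return 1.0 if math.cos(packet_heading - receiver_heading) < 0 else 0.5
--- End code ---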
rsucoop:
Eric, any ideas about this?
EricL:
I'd have some design questions to start:
What happens when multiple bots are speaking at the same time? Do the sound vars of a third or Nth bot represent a summation of the values spoken by the others? Do values attenuate over distance? Over what distance does sound travel (please say the same as vision)? I assume we use some standard distance/cycle ratio as the sound propagation rate. If a bot is moving counter to the sound wave, does it miss words due to wave compression?
Implementing this might prove computationally expensive, along the same lines as vision. I'd have to keep a per-bot buffer of the words spoken over the last N cycles and compute which words from which other bots strike each bot each cycle. You can forget about things like reflection and attenuation due to shape corners, etc. Too hard for now.
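One possible answer to the design questions above (summation of overlapping speakers, linear attenuation with distance), combined with the per-cycle buffer described here. SOUND_SPEED, SOUND_RANGE, and the bot fields are assumed values for illustration, not simulator constants:

--- Code: ---
import math

SOUND_SPEED = 8.0    # distance units per cycle (assumed)
SOUND_RANGE = 300.0  # same reach as vision, as requested (assumed value)

utterances = []      # (x, y, cycle_spoken, values) for recent cycles

def speak(bot, cycle, values):
    utterances.append((bot.x, bot.y, cycle, list(values)))

def heard_values(bot, cycle, n_slots=20):
    """Sum every wavefront that sweeps past this bot this cycle,
    attenuated linearly with distance from the speaker."""
    heard = [0] * n_slots
    for x, y, spoken, values in utterances:
        radius = (cycle - spoken) * SOUND_SPEED   # wavefront radius now
        dist = math.hypot(bot.x - x, bot.y - y)
        # the wave crosses the bot during this cycle's radius step
        if radius - SOUND_SPEED < dist <= radius and dist <= SOUND_RANGE:
            attenuation = 1.0 - dist / SOUND_RANGE
            for i, v in enumerate(values[:n_slots]):
                heard[i] += int(v * attenuation)
    return heard

def prune(cycle):
    """Drop utterances whose wavefront has left audible range."""
    utterances[:] = [u for u in utterances
                     if (cycle - u[2]) * SOUND_SPEED <= SOUND_RANGE]
--- End code ---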
I guess I would want to dive a little deeper into the core functionality we want to achieve with this before jumping into an implementation. For example, I've been toying with the idea of bots being able to gather and store another bot's ID, say as a new refvar "refid", which gets populated along with the other refvars and trefvars. A bot could then grab this and use it for various things, including getting the refvars of a bot without looking at it (as long as it is within visible range) or using memloc/memval.
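A sketch of how a stored refid might be resolved each cycle; bots_by_id, the visible-range value, and the returned refvar subset are all assumptions for illustration:

--- Code: ---
import math

bots_by_id = {}   # id -> bot object; assumed to be kept by the simulator

def refvars_by_id(me, refid, visible_range=300.0):
    """Read another bot's refvars from a stored ID without looking at
    it, provided it is still within visible range (per the proposal)."""
    target = bots_by_id.get(refid)
    if target is None:
        return None       # the bot died or the ID is stale
    if math.hypot(me.x - target.x, me.y - target.y) > visible_range:
        return None       # out of range: refvars are not refreshed
    # illustrative subset only; the real refvar list is longer
    return {"refeye": target.mem.get("eye", 0),
            "refnrg": target.mem.get("nrg", 0)}
--- End code ---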
If the goal is to create a "hey you, look at me over here" capability, we might simply add a sysvar that provides a bot with the ID of the nearest bot that happens to be looking at it.
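One way such a sysvar could be filled in each cycle; view_check (is bot b inside bot a's eye focus?) is an assumed helper, not an existing function:

--- Code: ---
import math

def nearest_looker_id(me, bots, view_check):
    """Return the ID of the closest bot whose eye is focused on `me`,
    or 0 if nobody is looking. view_check(a, b) answers whether bot b
    is in bot a's eye focus."""
    best_id, best_dist = 0, float("inf")
    for other in bots:
        if other is me or not view_check(other, me):
            continue
        d = math.hypot(me.x - other.x, me.y - other.y)
        if d < best_dist:
            best_id, best_dist = other.id, d
    return best_id
--- End code ---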
If the goal is to provide a richer communication mechanism, I'd want to discuss how to enhance the in/out pairs first. Requiring a bot to focus on another to "hear" it is a huge computation saver, but a bot could aim its ears at a specific bot, so to speak, using the ID above and receive the .in values from a bot without looking at it...
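A sketch of those "aimed ears", assuming a hypothetical .earid memory location and the existing .in/.out pair naming:

--- Code: ---
def aimed_ears(me, bots_by_id):
    """Copy the .out pair of the bot whose ID is stored in a
    hypothetical .earid location into this bot's .in pair, with no
    eye focus required."""
    target = bots_by_id.get(me.mem.get("earid", 0))
    if target is not None:
        me.mem["in1"] = target.mem.get("out1", 0)
        me.mem["in2"] = target.mem.get("out2", 0)
--- End code ---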