Author Topic: Implementing Sound  (Read 7936 times)

Offline rsucoop

  • Bot Destroyer
  • Posts: 166
Implementing Sound
« on: April 11, 2008, 01:06:38 AM »
Create a point and call it the mouth; say it's right where the eye/gun is. Every time a value other than 0 is stored in it, a curved line is shot forward at some speed. When it hits another bot, one of two things happens: if the sound hits the back, only half of the information can be heard; if it hits the front, the bot hears all of the information. The curved packet would require a few out vars so they can be read when hit. I say 20, but limit 10 of them to 1s and -1s (or 0) only. This would give the ability to create language over greater distances, but it should be somehow linked to multi-bots. Perhaps upon collision the out vars of both bots are stored into the packet of the curve and shot out of either bot with a 1 in the mouth var. I think this would open the doors for a truly intelligent species.
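A minimal sketch of what such a sound packet might look like as a data structure (Python used just for illustration; every name here is hypothetical, not actual DarwinBots source):

Code:
from dataclasses import dataclass, field
from typing import List

@dataclass
class SoundPacket:
    origin_id: int            # ID of the emitting bot
    heading: float            # direction of travel, in radians
    speed: float              # distance travelled per cycle
    payload: List[int] = field(default_factory=list)   # up to 20 out vars

    def __post_init__(self):
        # Per the proposal: 20 values max, the first 10 restricted to -1/0/1.
        assert len(self.payload) <= 20
        assert all(v in (-1, 0, 1) for v in self.payload[:10])

    def heard_values(self, hit_from_behind: bool) -> List[int]:
        # A bot struck from behind hears only half of the information.
        if hit_from_behind:
            return self.payload[: len(self.payload) // 2]
        return self.payload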

Offline shvarz

  • Bot God
  • Posts: 1341
Implementing Sound
« Reply #1 on: April 11, 2008, 06:18:08 PM »
You realize, of course, that sound does not spread as a one-directional squiggly line.
"Never underestimate the power of stupid things in big numbers" - Serious Sam

Offline rsucoop

  • Bot Destroyer
  • Posts: 166
Implementing Sound
« Reply #2 on: April 11, 2008, 06:20:16 PM »
Quote from: shvarz
You realize, of course, that sound does not spread as a one-directional squiggly line.
Yes, it would be shaped like ) and head in the -> direction, respectively. It's more a percentage of effectiveness to transmit/receive. A mouth projects outward, so the sound is focused in a central forward area; those behind can hear some things.

Offline rsucoop

  • Bot Destroyer
  • Posts: 166
Implementing Sound
« Reply #3 on: April 15, 2008, 10:53:45 PM »
Eric, any ideas about this?

Offline EricL

  • Administrator
  • Bot God
  • Posts: 2266
Implementing Sound
« Reply #4 on: April 15, 2008, 11:52:43 PM »
I'd have some design questions to start:

What happens when multiple bots are speaking at the same time?  Do the sound vars of a third or Nth bot represent a summation of the values spoken by the others?  Do values attenuate over distance?  Over what distance does sound travel (please say the same as vision)?  I assume we use some standard distance/cycle ratio as the sound propagation rate.  If a bot is moving counter to the sound wave, does it miss words due to wave compression?

Implementing this might prove computationally expensive, along the same lines as vision.  I'd have to keep a per-bot buffer of the words spoken over the last N cycles and compute which words from which other bots strike each bot each cycle.  You can forget about things like reflection and attenuation due to shape corners, etc.  Too hard for now.
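A minimal sketch of the bookkeeping this implies (hypothetical names and constants; Python used just for illustration):

Code:
from collections import deque
import math

N_CYCLES = 50        # how long a spoken word stays in flight (assumed)
SOUND_SPEED = 80.0   # propagation rate, distance units per cycle (assumed)

class SpeechBuffer:
    """Per-bot buffer of the words spoken over the last N cycles."""
    def __init__(self):
        self.words = deque(maxlen=N_CYCLES)   # (cycle_spoken, value) pairs

    def speak(self, cycle, value):
        self.words.append((cycle, value))

def words_striking(listener_pos, speaker_pos, buffer, cycle):
    """Words whose expanding wavefront crosses the listener this cycle."""
    dist = math.dist(listener_pos, speaker_pos)
    # Struck if the wavefront radius passed the listener since last cycle.
    return [v for t, v in buffer.words
            if 0 <= dist - (cycle - t - 1) * SOUND_SPEED < SOUND_SPEED]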

I guess I would want to dive a little deeper into the core functionality we want to achieve with this before jumping into an implementation.  For example, I've been toying with the idea of bots being able to gather and store another bot's ID, say as a new refvar "refid", which gets populated along with the other refvars and trefvars.  A bot could then grab this and use it for various things, including getting the refvars of a bot without looking at it (as long as it is within visible range) or using memloc/memval.

If the goal is to create a "hey you, look at me over here" capability, we might simply add a sysvar that provides a bot with the ID of the nearest bot that happens to be looking at it.  

If the goal is to provide a richer communication mechanism, I'd want to discuss how to enhance the in/out pairs first.  Requiring a bot to focus on another to "hear" it is a huge computation saver, but a bot could aim its ears at another specific bot, so to speak, using the ID above and be able to receive the .in values from a bot without looking at it...
« Last Edit: April 15, 2008, 11:54:47 PM by EricL »
Many beers....

Offline rsucoop

  • Bot Destroyer
  • Posts: 166
Implementing Sound
« Reply #5 on: April 16, 2008, 12:24:36 AM »
Quote from: EricL
I'd have some design questions to start: What happens when multiple bots are speaking at the same time? [...] If the goal is to provide a richer communication mechanism, I'd want to discuss how to enhance the in/out pairs first.

Ok, in situations where sounds overlap, a harmonic is created. So a harmonic array can be created to support, say, three overlapping sounds at a time. The information would be stored independently of the other vars. The speed of sound:

In a fluid the only non-zero stiffness is to volumetric deformation (a fluid does not sustain shear forces).

Hence the speed of sound in a fluid is given by

c = sqrt(K / ρ)

where
K is the bulk modulus of the fluid and ρ is the density of the fluid.

I would suggest using the same equation to slow the speed of the sound, thus reducing the amount of information received, until it dissolves. I would say make it last three times the maximum distance of sight, unless you want to emulate whale songs; but this could mean a bulk of refvars in the ear at once, and we could use dissonance to remove certain bits of information.
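For reference, the Newton-Laplace formula above as a quick calculation (approximate water values; not a suggestion that the sim should use SI units):

Code:
import math

def speed_of_sound(bulk_modulus, density):
    """c = sqrt(K / rho): the speed of sound in a fluid."""
    return math.sqrt(bulk_modulus / density)

# Water: K ~ 2.2e9 Pa, rho ~ 1000 kg/m^3  ->  ~1483 m/s
print(speed_of_sound(2.2e9, 1000.0))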

Frequency Emulation:

So the value stored in a wave is a frequency: 1 = 1 Hz, the lowest possible audible tone of a Darwinian. The list of possible frequencies would be limited by DNA and system specs.
I would suggest limiting the frequency list to its maximum, but audible tones should be somewhat lower, since the higher frequencies are considered harmful in nature. The rules of physics can be applied to such a system, allowing sound to happen naturally, in numbers.

Solution to Overlapping Sound Waves:

Say we have a harmonic array: 1, 3, 4. 4 is even, but in this case it's also the only one that is not a prime, so it can be considered a weaker frequency; it will become dissonant under the following conditions. Say we have the exact same harmonic array overlap this one: the values won't change, but the two wave ranges will effectively combine to double their range, due to amplification. If the values are reversed, say 4, 3, 1, the 4s would cancel, as they are not in the sound scheme for swaps, and the 3 and 1 would switch to 1 and 3. It's simple for frequencies: a difference of 10 times is considered an octave, and a 1-3 is something close to a minor 2nd in music, common to songs of natives; a step from 3-4 is actually some mathematical thing that removes part of the tone in either, causing slight alterations of +/- 10%. Inversions of waves tend to cancel, depending on the magnitude and direction, but I say just use this for overlapping waves with the same direction and speed only. Using the same formula as for collisions, a sound wave colliding into another can be altered to change its speed, range and harmonic array/frequency.

I think this shouldn't be as difficult as everyone thinks. It's like a long-range caller that's faster than shots and doesn't have to be visible, so that should curtail most usage. As for the numbers, I think simply combining waves that overlap (or closely touch, within 50% of each other's space) would reduce the amount of bulk information being transmitted while still providing some information to the listener. I also think that if the combined wave included the IDs of all bots inside the harmonic combination, then it's a matter of genetic integrity and proper speech/learning routines to know and interpret what's heard; otherwise it's like static.

Edit: as for moving ears, I say fix them to a point on the body so it requires a rotation of the bot; then they can aim their eyes if they want, but the bot cannot take refvars from the eyes and ears at once. Similar to the bump feature, only the information contained in a wave can be received, understood, and acted upon, while being able to look at another bot and send/receive information. The sound would enter the ear and disappear, so in that instance the bot must store the values or lose the information. I think there has to be a way to override the ears, to emulate deafness, and to prevent combat situations where a bot screams at another and causes it to go blind in combat... When a wave hits a bot, consider the harmonic array to exist as thirds of the wave: the lower third is the 1st number, the middle the 2nd number and the top the 3rd number. As each third/number hits a bot, it goes away, and the rest go on until they are out of energy/momentum or hit a bot. This could mean that some information may be lost, but it could be exploited for group theory, since the wave could hit multiple bots, and the recipients could use any of the numbers included in the wave to alert others, or act differently when more of the information in the wave is found later by another bot that received a bit of it.
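One literal reading of the "thirds" rule above, as a sketch (hypothetical fields, not actual DB code):

Code:
def on_wave_hits_bot(wave, bot):
    # The harmonic array is consumed in thirds: lower, then middle, then top.
    # Each bot struck removes the next third; the remainder travels on.
    if wave.harmonics:
        heard = wave.harmonics.pop(0)
        bot.store_heard(wave.sender_id, heard)   # store it or lose it
    if not wave.harmonics or wave.momentum <= 0:
        wave.alive = False                       # wave is spent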

1 ear or 2 I ask...
« Last Edit: April 16, 2008, 12:32:52 AM by rsucoop »

Offline rsucoop

  • Bot Destroyer
  • Posts: 166
Implementing Sound
« Reply #6 on: April 16, 2008, 07:21:27 PM »
I assume this is impossible, Eric?

Offline Numsgil

  • Administrator
  • Bot God
  • Posts: 7742
Implementing Sound
« Reply #7 on: April 17, 2008, 03:37:46 AM »
I don't think we necessarily need to implement a realistic sound system.  Darwinbots, at its heart, is fairly abstract.  And I'm not sure we want bots to be doing Fourier transforms in their DNA.

Sound could work as an instantaneous broadcast signal, something like the in/out pairs.  Each pair would be a frequency.  The value a bot "hears" in an in is a weighted average of the values other bots are shouting, weighted by the volume at which each is shouting and by its distance from the listener.  Also, a bot gets a gradient vector (basically an angle) toward which the volume is increasing.  That way the bot can turn to look at something that's making noise, or turn in the other direction and run away from it.

Real noise follows an inverse-square law for volume, IIRC, which isn't something we want to emulate.  We'd probably use a kernel that attenuates to 0 after a certain distance (say, the max eye distance).
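A sketch of this model: a per-channel weighted average plus a loudness-gradient angle, with a linear kernel standing in for the attenuation (the range constant and all names are assumptions):

Code:
import math

MAX_RANGE = 1400.0   # attenuate to zero at max eye distance (assumed value)

def kernel(dist):
    """Linear falloff to 0 at MAX_RANGE instead of an inverse-square law."""
    return max(0.0, 1.0 - dist / MAX_RANGE)

def hear(listener, speakers, channel):
    """Heard value on one frequency channel, plus the gradient angle."""
    total_w = total_v = gx = gy = 0.0
    for s in speakers:
        dx, dy = s.x - listener.x, s.y - listener.y
        dist = math.hypot(dx, dy)
        w = s.volume[channel] * kernel(dist)
        if w <= 0.0 or dist == 0.0:
            continue
        total_w += w
        total_v += w * s.out[channel]
        gx += w * dx / dist    # weighted direction toward louder sources
        gy += w * dy / dist
    if total_w == 0.0:
        return 0.0, 0.0
    return total_v / total_w, math.atan2(gy, gx)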

We could introduce involuntary sounds as well.  Maybe voluntary motion produces noise, as does hitting walls and other bots, etc.  And if we really wanted to get fancy, we could actually feed data in to the speakers of a computer and let people listen to the noises of a certain point in the world.

I definitely think we want to avoid any sort of combining of different frequencies in to a single value.

Offline rsucoop

  • Bot Destroyer
  • Posts: 166
Implementing Sound
« Reply #8 on: April 17, 2008, 10:25:21 AM »
The inverse square law assumes there is no interference, so we don't have to use it; plus it's running through water. Maybe Brownian motion should produce some fuzzy sounds. But ok.

The idea of combination was to emulate how harmonics work IRL. But if we don't want that, then ok.

Offline Testlund

  • Bot God
  • Posts: 1574
Implementing Sound
« Reply #9 on: April 17, 2008, 10:39:58 AM »
If you decide to implement this in DB I hope you put a checkbox in the GUI so I can choose not to use it, as single-celled organisms are completely deaf in nature.
How about something simpler, like vibrations instead, which just have different strengths depending on the size of the object (shape or bot)? Still, I don't know what the usefulness would be, but it would at least be closer to realism.
The internet is corrupt and controlled by criminally minded people.

Offline Numsgil

  • Administrator
  • Bot God
  • Posts: 7742
Implementing Sound
« Reply #10 on: April 17, 2008, 01:57:24 PM »
Sounds are vibrations, just at different hertz.  And certainly unicellular critters have other means of gaining information about the ones around them.  They're not some sort of stupid animal mindlessly bumbling through the water.

Offline Testlund

  • Bot God
  • Posts: 1574
Implementing Sound
« Reply #11 on: April 17, 2008, 02:38:01 PM »
I meant the kind of hertz that creatures can feel, not the kind you need specialised senses to pick up. As I understand it, single-celled organisms can react to things they touch or smell, but they can't hear or see, right? Other than light or darkness in some species. Otherwise they appear pretty mindless to me when I've looked into a microscope or watched TV programs about such organisms.
The internet is corrupt and controlled by criminally minded people.

Offline Numsgil

  • Administrator
  • Bot God
  • Posts: 7742
Implementing Sound
« Reply #12 on: April 17, 2008, 04:40:56 PM »
I haven't been able to find much discussion about what single celled organisms can and can't sense, but when biology is concerned I tend to err on the side of giving them credit for abilities.  I am constantly astounded by the sophistication of even simple organisms.

Offline Testlund

  • Bot God
  • Posts: 1574
Implementing Sound
« Reply #13 on: April 17, 2008, 06:29:50 PM »
I agree there is a lot to be surprised about in biology, and as a matter of fact I decided to do a search on Google to see if there was anything written about this. I found this about bacteria and sound:

http://www.jstage.jst.go.jp/article/jgam/44/1/44_49/_article

...and this article with a link to a pdf document with remarkable information about complex behavior in bacteria.

http://mnemosynosis.livejournal.com/10810.html

So if you want to implement the ability for bots to perceive and transmit sound, I guess it might not be too crazy an idea after all.
The internet is corrupt and controlled by criminally minded people.

Offline EricL

  • Administrator
  • Bot God
  • Posts: 2266
Implementing Sound
« Reply #14 on: April 17, 2008, 07:34:22 PM »
Quote from: rsucoop
I assume this is impossible, Eric?
Nothing is impossible, but my time to work on DB is finite and the list of things to implement is long.  Realistically, I would not get to this in the next 6 months even if there were no open questions.  People are always welcome to create their own forks of the code if they don't want to wait for me - others have done this in the past and I would be happy to assist such efforts, as well as integrate working code back into the main line if and when appropriate.

I'm all for adding some functionality in this general area, but there are a bunch of implementation issues with what you propose.  I'll just point out one, and that is that the computational hit of anything that has an N^2 relationship between bots, such as vision or hearing, is proportional to the distance over which it acts.  The farther sound propagates, the more bots might be emitting a sound a given bot can hear, the more sound interactions have to be integrated for each bot, and so on.  As such, I'm pretty opposed to implementing anything (with the possible exception of the suggestion below) that operates over a range longer than the bucket dimension (4000), which is just a tiny bit farther than how far a bot with the largest possible radius can see with the narrowest possible eye.
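A sketch of the range argument: if hearing is capped at the bucket dimension, each listener only ever has to scan the 3x3 block of buckets around it rather than every bot in the sim (structures hypothetical):

Code:
BUCKET = 4000   # bucket dimension, per the limit above

def candidate_speakers(listener, buckets):
    """Bots that could possibly be within hearing range of the listener."""
    bx, by = int(listener.x // BUCKET), int(listener.y // BUCKET)
    for nx in range(bx - 1, bx + 2):
        for ny in range(by - 1, by + 2):
            yield from buckets.get((nx, ny), ())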

Another pet peeve of mine, which is not specific to this suggestion, is that I hold the opinion that not only is it not necessary to implement real-world physics in all cases for our digital organisms, but that doing so can be counterproductive to evolving complexity.  I see no reasons other than appealing to human intuition and creating environmental richness why the DB world must emulate the real world, and environmental richness, I would argue, can often be achieved more efficiently through the implementation of more "native" digital means for bots to interact with each other and the environment, NOT by emulating real-world physics.  Others disagree with me on this and I won't belabor it here, but it is another reason why I am not keen to spend a bunch of time myself writing code whose main purpose is to simulate a real-world phenomenon.  As I say above, I would much prefer to focus on what interactions we are trying to achieve - what bot-to-bot or bot-to-environment scenarios we are trying to allow - and let the features and physics follow from that.

Do we want a "hey you, I'm over here!" mechanism for one bot to shout at another when one or both are not looking in the right direction, or for a bot to "hear" the bot nearest to it, or are we after a more general-purpose means for bots to gather more data about their surroundings than vision alone currently allows?  If the former, I might offer up a simple suggestion (this is off the top of my head) such as populating a bot's refvars and/or in/out pairs with data from the nearest bot (no matter what the range) when there is nothing in the focus eye, essentially allowing him to "hear" the nearest bot and gather data about it without seeing it.  He could turn in that direction, move towards it, run away from it, communicate with it, etc.  We might choose to disallow writing to a bot's memory until he can see it, to prevent long-range memory attacks.....
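A sketch of that fallback: populate the refvars from the nearest bot when the focus eye is empty, and gate memory writes on actually seeing the target (all names hypothetical, not actual DB source):

Code:
import math

def update_refvars(bot, all_bots):
    if bot.focus_eye_target is not None:
        bot.refvars = bot.focus_eye_target.snapshot()
        bot.may_write_memory = True    # seen: memloc/memval allowed
        return
    others = [b for b in all_bots if b is not bot]
    if others:
        nearest = min(others,
                      key=lambda b: math.hypot(b.x - bot.x, b.y - bot.y))
        bot.refvars = nearest.snapshot()   # "heard", not seen
        bot.may_write_memory = False       # no long-range memory attacks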

If the latter - providing bots better means to gather data about their surroundings - then my inclination is to build upon vision, since it seems to me a natural place to do that.
Many beers....