Okay, well, let's break it down.
First, why do we want sound? That is, what are the use cases we expect for it?
There are two primary uses I can think of:
- Locating prey
- Calling for mates, giving warning calls, etc.
Locating prey is a passive mechanism. It mostly involves listening for the sounds a prey creature makes and using them to locate it — not really the production of sound on purpose (echolocation notwithstanding). That's not unique to sound, of course; vision and smell play the same role. Vision provides the most accurate long-range information, but requires a relatively sparse environment, so it's less useful, say, underground. Smell provides fairly good information and doesn't require line of sight, but it's very susceptible to time delay, currents/winds, etc. Sound provides very little information in and of itself, but it gives you a fairly precise location for the source, is essentially instantaneous, and can be overloaded with additional information on purpose (e.g. bird songs).
Speaking of which, that's the second item. This is an active mechanism. For anyone in the US, there's a good David Attenborough documentary on Netflix that talks about bird calls. There are a couple of key mechanisms at play here. First, high-pitched noises are harder to locate precisely. This is because they tend to reflect off of different surfaces, masking their origin. But they're also quite short-ranged because of that. They're useful for "anonymous" messages: danger calls, etc. Lower-pitched noises can be located much more easily, and travel much further. They're useful for making yourself conspicuous — either intimidating others, or distraction, or helping others locate you because you're lost. So pitches become important.
Also, there's a time component to noise. That is, bird song is not just a single straight pitch. It goes up and down and generally carries on a "tune". Information is encoded in this time signal. Our brains (and bird brains) do a good job of decoding this tumult of noises into its constituent pitches as functions of time. The amount of information that can be conveyed is limited, however, by the acoustical properties of the environment. If the environment tends to muffle sounds, calls have to be short and loud. If the environment can carry sounds better, the calls can grow in complexity and tempo.
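As a toy illustration of encoding information in a time-varying pitch: here's a hypothetical "song" scheme where each symbol becomes a short pure tone, and the listener recovers the symbols by finding the dominant pitch of each note. The sample rate, note length, and pitch table are all made up for the sketch:

```python
import numpy as np

SAMPLE_RATE = 1000           # samples per second (arbitrary)
SYMBOL_LEN = 200             # samples per "note"
FREQS = {0: 50.0, 1: 120.0}  # map each symbol to a pitch, in Hz

def sing(bits):
    """Encode a bit sequence as a sequence of pure tones (a crude song)."""
    t = np.arange(SYMBOL_LEN) / SAMPLE_RATE
    return np.concatenate([np.sin(2 * np.pi * FREQS[b] * t) for b in bits])

def listen(signal):
    """Decode by finding the dominant pitch of each note via an FFT."""
    bits = []
    for i in range(0, len(signal), SYMBOL_LEN):
        chunk = signal[i:i + SYMBOL_LEN]
        spectrum = np.abs(np.fft.rfft(chunk))
        peak_hz = np.fft.rfftfreq(len(chunk), 1 / SAMPLE_RATE)[spectrum.argmax()]
        # Pick whichever symbol's pitch is closest to the observed peak.
        bits.append(min(FREQS, key=lambda b: abs(FREQS[b] - peak_hz)))
    return bits

message = [1, 0, 1, 1, 0]
decoded = listen(sing(message))
```

A muffling environment in this model would smear the spectrum of each chunk, forcing longer notes (bigger SYMBOL_LEN) and hence a slower "tempo" — which matches the intuition above.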
Next, let's look at the physics of sound. First, what we think of as "sound" is actually a time-varying one-dimensional signal. That's it. The full sound signal is all garbled together into that single dimension. Our brains are just able to decompose it into pitches, using something probably like a discrete Fourier transform. That signal propagates through air (or water) as a longitudinal wave. Importantly, it can echo. More importantly, it can diffuse, much like light can, and create "ambient" noise. This is why high-pitched noises are hard to locate. It's the same way that it's not always obvious where a light is coming from in a well-lit room.
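For instance, here's roughly what that decomposition looks like: two pitches garbled together into one 1-D signal, and a discrete Fourier transform pulling them back apart. Sample rate and frequencies are arbitrary choices for the sketch:

```python
import numpy as np

SAMPLE_RATE = 1000  # Hz; hypothetical discretization of the 1-D signal

# Two sources "garbled together" into one 1-D pressure signal:
# a loud 60 Hz tone plus a quieter 250 Hz tone.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE  # one second of samples
signal = np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 250 * t)

# A discrete Fourier transform pulls the mixture back apart into pitches.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / SAMPLE_RATE)
peaks = freqs[spectrum > len(signal) / 8]  # bins with substantial energy
```

The single garbled signal goes in; the two constituent pitches (60 Hz and 250 Hz) come back out as the bins with substantial energy.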
Also, to continue the light analogy, there's such a thing as an acoustic shadow.
Continuous sound is essentially the production of a continuous disturbance, which carries energy over time (basically, see this article). Importantly, note that pitch and energy are not related. Low-pitched sounds carry energy more efficiently, because they won't get reflected or absorbed by things as readily. But there's nothing that makes high notes easier or harder to produce than low notes.
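A quick numerical sanity check of the "pitch and energy are not related" claim: the average power of a pure tone depends on its amplitude, not its frequency (sample rate and frequencies here are arbitrary):

```python
import numpy as np

def avg_power(freq_hz, amplitude, sample_rate=10000, seconds=1.0):
    """Mean squared pressure of a pure tone -- a proxy for acoustic power."""
    t = np.arange(int(sample_rate * seconds)) / sample_rate
    wave = amplitude * np.sin(2 * np.pi * freq_hz * t)
    return np.mean(wave ** 2)

# Same amplitude, wildly different pitch: the power carried is the same
# (amplitude**2 / 2 for a sine, regardless of frequency).
low = avg_power(30, 1.0)
high = avg_power(3000, 1.0)
```

The *transport* differences (absorption, reflection) live in the environment, not in the source — which is the distinction being drawn above.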
Very low notes we don't even perceive as sound anymore; they feel like a vibration. In a fluid model, such a note would simply be a disturbance, which you could detect as changes in the surface velocity of the fluid at a boundary over time. Of course, if we're thinking about something like Darwinbots, where there's a distinctive, discrete "step", the Nyquist frequency means we can't really detect any sound in the fluid above half the step rate — 0.5 Hz, if one step corresponds to one second. Ah well. But for passive predator-prey hunting, detecting disturbances is all that's required, and that information is well carried by the changes in the fluid, which is a natural consequence of a simulated fluid system.
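Here's the sampling problem in miniature: observing the fluid once per step, a disturbance above the Nyquist limit aliases — it's indistinguishable from a slower one (assuming a one-second step for the sketch):

```python
import math

def sample(freq_hz, steps, dt=1.0):
    """Observe a sinusoidal disturbance once per simulation step."""
    return [math.sin(2 * math.pi * freq_hz * n * dt) for n in range(steps)]

# With one sample per 1 s step, the Nyquist limit is 0.5 Hz: a 0.2 Hz
# disturbance and a 1.2 Hz one produce identical readings, because they
# differ by exactly one full cycle per step.
slow = sample(0.2, 10)
fast = sample(1.2, 10)
```

So anything faster than half the step rate folds back into the detectable band — the simulation literally cannot tell the two apart from its samples.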
For sending signals, though, the fluid system isn't sufficient. Simulating a full sound system is tricky, because of the reflection/echo. It basically puts it on par with a ray tracer in that regard. Yuck.
So I don't have a complete proposal or anything, but that's the sort of stuff I'd think about for a sound system. More of a brain dump than anything at this point. But it's Friday and I'm tired.