Darwinbots Forum
Code center => Darwinbots3 => Topic started by: Botsareus on September 05, 2014, 01:47:01 PM
-
This approach, like any, has advantages and disadvantages:
Advantages: fast.
Disadvantages: accuracy falls off exponentially with distance.
I envision two new sysvars: one to store the 'flavor' of a sound and another to store its 'traveling distance'. Charge exponentially more energy for a more powerful sound.
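To make the proposal concrete, here is a minimal Python sketch of the two rules (not Darwinbots code; the function names and constants are invented for illustration): the energy charged grows exponentially with the sound's strength, and the accuracy of the perceived source position decays exponentially with distance.

```python
import math

def sound_energy_cost(strength, base_cost=1.0, growth=1.5):
    """Energy charged to emit a sound; grows exponentially with strength."""
    return base_cost * growth ** strength

def location_accuracy(distance, falloff=0.01):
    """Accuracy (0..1) of the perceived source location; decays exponentially."""
    return math.exp(-falloff * distance)
```

With these made-up constants, a strength-10 call costs roughly 58x the base cost, and a listener 100 units away localizes the source at about 37% accuracy.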
You will find a lot of ugliness in my code, but I just wanted to get the idea out as fast as possible.
Option Explicit

Private Type Point
    X As Single
    Y As Single
End Type

Private Type Rect
    p1 As Point
    p2 As Point
End Type

Private Type Velocity
    ang As Double
    speed As Double
End Type

Private Type SoundPoint
    location As Point
    vel As Velocity
End Type

Private Type SoundCopy
    location As Point
    vel As Velocity
    active As Boolean
End Type

Private Type RectSections
    Hit As Boolean
    TopLeft As Boolean
    TopRight As Boolean
End Type

Private Square As Rect
Private ClientBorders As Point
Private soundpoints(15) As SoundPoint
Private soundcopys(15) As SoundCopy
Private mousepoz As Point
Private Const pi As Double = 3.14159265358979
Private wavestrength As Byte
Private waveflawor As Byte
Private Sub Form_Load()
    Setup
    Simulate
End Sub

Private Sub Simulate()
    Do
        CalculatePoints
        FormatScreen
        DrawRect
        DrawWave
        DoEvents
    Loop
End Sub

Private Function CalcRectSections(pt As Point) As RectSections
    CalcRectSections.Hit = pt.X > Square.p1.X And pt.X < Square.p2.X And pt.Y > Square.p1.Y And pt.Y < Square.p2.Y
    'The following is really ugly, because we assume the rect is a square centered on a 2000-unit screen
    CalcRectSections.TopRight = pt.Y < pt.X
    CalcRectSections.TopLeft = pt.Y < (2000 - pt.X)
End Function

Private Sub DrawWave()
    Dim a As Byte
    Dim b As Byte
    For a = 0 To 15
        b = (a + 1) Mod 16
        Line (soundpoints(a).location.X, soundpoints(a).location.Y)-(soundpoints(b).location.X, soundpoints(b).location.Y), wavecolor
        'The following is really ugly, because we assume we're only ever going to have one copy
        If soundcopys(b).active And (Not soundcopys(a).active) Then
            Line (soundpoints(a).location.X, soundpoints(a).location.Y)-(soundcopys(b).location.X, soundcopys(b).location.Y), wavecolor
        End If
        If soundcopys(b).active And soundcopys(a).active Then
            Line (soundcopys(a).location.X, soundcopys(a).location.Y)-(soundcopys(b).location.X, soundcopys(b).location.Y), wavecolor
        End If
        If (Not soundcopys(b).active) And soundcopys(a).active Then
            Line (soundcopys(a).location.X, soundcopys(a).location.Y)-(soundpoints(b).location.X, soundpoints(b).location.Y), wavecolor
        End If
    Next
End Sub

Private Function wavecolor() As Long
    'Each flavor's color fades toward black as the wave ages
    Select Case waveflawor
        Case 0: wavecolor = RGB(255 - wavestrength, 255 - wavestrength, 0)
        Case 1: wavecolor = RGB(0, 255 - wavestrength, 255 - wavestrength)
        Case 2: wavecolor = RGB(255 - wavestrength, 0, 255 - wavestrength)
    End Select
End Function

Private Sub CalculatePoints()
    WaveReset
    WaveUpdate
    WaveSquareInteractions
End Sub
Private Sub WaveSquareInteractions()
    Dim a As Byte
    Dim result As RectSections
    For a = 0 To 15
        With soundpoints(a)
            result = CalcRectSections(.location)
            If result.Hit And .vel.speed = 3 Then 'I am using the speed itself to figure out where the wave originated (ugly)
                makecopy a
                If result.TopLeft = result.TopRight Then
                    .vel.ang = pi * 2 - .vel.ang 'entered through the top or bottom edge: reflect vertically
                Else
                    .vel.ang = pi - .vel.ang 'entered through the left or right edge: reflect horizontally
                End If
            End If
            If (Not result.Hit) And .vel.speed = 1.5 Then
                .vel.speed = .vel.speed * 2 'the point has left the square: restore full speed
            End If
        End With
    Next
End Sub

Private Sub makecopy(a As Byte)
    'The following is really ugly, because we assume we're dealing with only one shape
    soundcopys(a).active = True
    soundcopys(a).vel = soundpoints(a).vel
    soundcopys(a).location = soundpoints(a).location
    soundcopys(a).vel.speed = soundcopys(a).vel.speed / 2 'the copy travels through the square at half speed
End Sub

Private Sub WaveUpdate()
    Dim a As Byte
    For a = 0 To 15
        With soundpoints(a)
            .location.X = .location.X + Cos(.vel.ang) * .vel.speed
            .location.Y = .location.Y + Sin(.vel.ang) * .vel.speed
        End With
        With soundcopys(a)
            If .active Then
                .location.X = .location.X + Cos(.vel.ang) * .vel.speed
                .location.Y = .location.Y + Sin(.vel.ang) * .vel.speed
            End If
        End With
    Next
End Sub
Private Sub WaveReset()
    Dim a As Byte
    'Age the current wave
    wavestrength = wavestrength + 1
    If wavestrength = 255 Then
        'If the wave has faded out, start a new one at the mouse position
        wavestrength = 0
        waveflawor = Int(Rnd * 3)
        For a = 0 To 15
            soundcopys(a).active = False
            soundpoints(a).location = mousepoz
            If CalcRectSections(mousepoz).Hit Then
                soundpoints(a).vel.speed = 1.5 'waves born inside the square start at half speed
            Else
                soundpoints(a).vel.speed = 3
            End If
            soundpoints(a).vel.ang = a / 16 * pi * 2 'bug fix: a / 15 would make point 15 duplicate point 0's direction
        Next
    End If
End Sub
Private Sub FormatScreen()
    Cls
    Dim ClientSize As Point
    ClientSize.X = Width - ClientBorders.X
    ClientSize.Y = Height - ClientBorders.Y
    'Keep a 2000x2000 logical area centered, whatever the window's aspect ratio
    If ClientSize.X > ClientSize.Y Then
        ScaleHeight = 2000
        ScaleWidth = 2000 * ClientSize.X / ClientSize.Y
        ScaleLeft = (ScaleHeight - ScaleWidth) / 2
        ScaleTop = 0
    ElseIf ClientSize.X < ClientSize.Y Then
        ScaleWidth = 2000
        ScaleHeight = 2000 * ClientSize.Y / ClientSize.X
        ScaleTop = (ScaleWidth - ScaleHeight) / 2
        ScaleLeft = 0
    Else
        ScaleWidth = 2000
        ScaleHeight = 2000
    End If
End Sub

Private Sub DrawRect()
    Line (Square.p1.X, Square.p1.Y)-(Square.p2.X, Square.p1.Y), vbWhite
    Line (Square.p1.X, Square.p1.Y)-(Square.p1.X, Square.p2.Y), vbWhite
    Line (Square.p2.X, Square.p2.Y)-(Square.p2.X, Square.p1.Y), vbWhite
    Line (Square.p2.X, Square.p2.Y)-(Square.p1.X, Square.p2.Y), vbWhite
End Sub

Private Sub Setup()
    Show
    Randomize 'seed Rnd so the random wave flavor differs between runs
    ClientBorders.X = Width - ScaleWidth
    ClientBorders.Y = Height - ScaleHeight
    BackColor = vbBlack
    AutoRedraw = True
    Square.p1.X = 800
    Square.p1.Y = 800
    Square.p2.X = 1200
    Square.p2.Y = 1200
    Width = 8000
    Height = 8000
    Caption = "Sound Demo"
    Set Icon = Nothing 'object properties need Set
    MousePointer = vbCrosshair
End Sub

Private Sub Form_MouseMove(Button As Integer, Shift As Integer, X As Single, Y As Single)
    mousepoz.X = X
    mousepoz.Y = Y
End Sub

Private Sub Form_QueryUnload(Cancel As Integer, UnloadMode As Integer)
    End
End Sub
-
I wonder if it would be fast in reality.
-
I bet you Numsgil has faster and cleaner ways to implement my proposal above.
-
Numsgil, do you like my idea or not?
-
Okay, well, let's break it down.
First, why do we want sound? That is, what are the use cases we expect for it?
There are two primary uses I can think of:
- Locating prey
- Calling for mates, giving warning calls, etc.
Locating prey is a passive mechanism. It mostly involves listening for the sounds a prey creature makes and using them to locate it; not so much producing sound on purpose (echolocation notwithstanding). That's not unique to sound, of course. Vision and smell have the same role. Vision provides the most accurate long-range information, but requires a relatively sparse environment, so it's less useful, say, underground. Smell provides fairly good information and doesn't require line of sight, but it's very susceptible to time delay, currents/winds, etc. Sound provides very little information in and of itself, but gives you a fairly precise location for the source, is essentially instantaneous, and can be overloaded with additional information on purpose (e.g. bird songs).
Speaking of which, that's the second item. This is an active mechanism. For anyone in the US, there's a good David Attenborough documentary (http://www.netflix.com/WiPlayer?movieid=70063076&trkid=3325854) on Netflix that talks about bird calls. There are a couple of key mechanisms at play here. First, high-pitched noises are harder to locate precisely. This is because they tend to reflect off different surfaces, masking their origin. But they're also quite short-ranged because of that. They're useful for "anonymous" messages: danger calls, etc. Lower-pitched noises can be located much more easily, and travel much further. They're useful for making yourself conspicuous: intimidating others, distracting them, or helping others locate you when you're lost. So pitch becomes important.
Also, there's a time component to noise. That is, bird song is not just a single straight pitch. It goes up and down and generally carries on a "tune". Information is encoded in this time signal. Our brains (and bird brains) do a good job of decoding this tumult of noises into its constituent pitches as functions of time. The amount of information that can be conveyed is limited, however, by the acoustical properties of the environment. If the environment tends to muffle sounds, calls have to be short and loud. If the environment can carry sounds better, the calls can grow in complexity and tempo.
Next, let's look at the physics of sound. First, what we think of as "sound" is actually a time-varying, one-dimensional signal. That's it. The full sound signal is all garbled together into that single dimension. Our brains are just able to decompose it into pitches, using something probably like a discrete Fourier transform (http://en.wikipedia.org/wiki/Discrete_Fourier_transform). That signal propagates through air (or water) as a longitudinal wave (http://hyperphysics.phy-astr.gsu.edu/hbase/sound/tralon.html). Importantly, it can echo. More importantly, it can diffuse, much like light can, and create "ambient" noise. This is why high-pitched noises are hard to locate. It's the same way that it's not always obvious where a light is coming from in a well-lit room.
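The pitch decomposition mentioned above can be illustrated with a textbook discrete Fourier transform. This is a naive O(N²) Python sketch for illustration only, not a proposal for the sim:

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive O(N^2) discrete Fourier transform; returns |X[k]| per frequency bin."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# A 1D signal mixing two pitches: 3 and 7 cycles per window.
N = 64
signal = [math.sin(2 * math.pi * 3 * t / N) +
          0.5 * math.sin(2 * math.pi * 7 * t / N) for t in range(N)]

mags = dft_magnitudes(signal)
# The two strongest bins below N/2 recover the constituent pitches (3 and 7).
peaks = sorted(range(1, N // 2), key=lambda k: mags[k], reverse=True)[:2]
```

The garbled one-dimensional mixture comes apart cleanly: the bins at 3 and 7 dominate, with the bin-3 magnitude twice the bin-7 one, matching the 1.0 and 0.5 amplitudes.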
Also, to continue the light analogy, there's such a thing as an acoustic shadow (http://www.ndt.net/ndtaz/content.php?id=300).
Continuous sound is essentially the production of a continuous disturbance, which carries energy over time (basically, see this article (http://www.physicsclassroom.com/class/sound/Lesson-2/Intensity-and-the-Decibel-Scale)). Importantly, note that pitch and energy are not related. Low-pitched sounds carry energy more efficiently, because they won't get reflected or absorbed by things as readily. But there's nothing that makes high notes easier or harder to produce than low notes.
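The intensity/decibel relationship from that article amounts to the inverse-square law plus a logarithmic loudness scale: a 3D point source spreads its power over a sphere, so doubling the distance quarters the intensity, about a 6 dB drop (a flat 2D sim would spread power over a circle instead, roughly 3 dB per doubling). A small Python sketch of the 3D case:

```python
import math

I0 = 1e-12  # reference intensity (threshold of hearing), W/m^2

def intensity_3d(power, r):
    """Point-source intensity at distance r: power spread over a sphere."""
    return power / (4 * math.pi * r ** 2)

def decibels(i):
    """Intensity level in dB relative to the threshold of hearing."""
    return 10 * math.log10(i / I0)

# Doubling the distance quarters the intensity: about a 6 dB drop.
drop = decibels(intensity_3d(1.0, 1.0)) - decibels(intensity_3d(1.0, 2.0))
```

Note the dB drop depends only on the distance ratio, not on the source power, which is why a logarithmic scale is convenient here.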
Very low notes we don't even perceive as sound anymore. It feels like a vibration. In a fluid model, it would simply be a disturbance, which you could detect as changes in the surface velocity of the fluid at a boundary over time. Of course, if we're thinking about something like Darwinbots where there's a distinctive, discrete "step", the Nyquist frequency (http://en.wikipedia.org/wiki/Nyquist_frequency) means we can't really detect any sounds naturally in the fluid that have a frequency higher than 0.5 Hz. Ah well. But for passive predator-prey hunting, detecting disturbances is all that's required, and that information is well carried by the changes in fluid, which is a natural consequence of a simulated fluid system.
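That Nyquist limit is easy to demonstrate: sampled once per step (1 "Hz"), a 0.7 Hz disturbance produces exactly the same samples as a phase-flipped 0.3 Hz one, so a bot reading the fluid each step cannot tell them apart. A purely illustrative Python sketch:

```python
import math

# One sample per simulation step = 1 Hz sampling, so the Nyquist limit is 0.5 Hz.
f_high, f_low = 0.7, 0.3  # 0.7 Hz folds down to |0.7 - 1.0| = 0.3 Hz

high = [math.sin(2 * math.pi * f_high * n) for n in range(8)]
low = [-math.sin(2 * math.pi * f_low * n) for n in range(8)]
# The two sequences coincide at every sample: the sim cannot distinguish the tones.
```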
For sending signals, though, the fluid system isn't sufficient. Simulating a full sound system is tricky, because of the reflection/echo. It basically puts it on par with a ray tracer in that regard. Yuck.
So I don't have a complete proposal or anything, but that's the sort of stuff I'd think about for a sound system. More of a brain dump than anything at this point. But it's Friday and I'm tired :D
-
This would be the most realistic:
detecting disturbances is all that's required, and that information is well carried by the changes in fluid, which is a natural consequence of a simulated fluid system.
A bot could give off a characteristic vibration depending on how it moves. If it spins at intervals, that could be detected as a rhythm for species recognition, etc.
-
Good writeup Numsgil, you hit both main points:
Locating prey
Calling for mates, giving warning calls, etc.
The clear advantage of sound, imo, over other informational channels is that sound travels through 'stuff' and ends up on the other side, usually with a delay.
-
Also, I want to add that sound as detection would basically be useless unless you also had varied visibility on bots.
If a bot can camouflage itself visually, detecting it with sound would be useful, or if night turned off visual detection and left only sound detection.
Are you hoping to work this into a larger cooperation system? To be able to use sounds as group signals?
What I'm thinking is before you get fancy with the sound system you can think of what you want it to do instead, and treat it like shots (in DB2) that stop as they hit the sides, but branch out in a radial circle.
So
Noise 1 = mating call = detectable T/F
Noise 2 = cooperation call to same species = detectable T/F
Noise 3 = food found = detectable T/F
Noise 4 = bot locating ping = detectable T/F
Noise 5 = warning call = detectable T/F
etc.
Couple that with a passive listening mode that makes the bot move in the direction of detectable sounds. Which brings up a problem: not sure if you play any submarine warfare games, but when an enemy sub pings you, you can only detect direction, not distance, until you have multiple contact points. So that's something to consider in your sound simulation: do you want distance-locating capabilities, or only a direction on detection? (One trick when pinged in a sub game is to just fire a torpedo straight down the ping's bearing, which is why I think bots could do the same.)
It would need a form of species identification in it, though. X would have to know a sound came from X and not from Y, if you wanted them to cooperate, or to not chase down the wrong species' mating call.
If you treat sound like an expanding circle that ignores other bots and stops at the edges, you should be able to avoid having to come up with sound reflection, cancellation, and all the other complexities that come with sound (and sound in water), and still have a functional "sound" simulation layer.
Though, I would love to see a http://en.wikipedia.org/wiki/SOFAR_channel ;)
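The expanding-circle, typed-noise scheme above might look something like this in outline (a Python sketch; the noise types, speed, and field size are all made-up numbers): a sound event grows by a fixed amount each step, ignores bots, stops at the field edge, and a listener on the ring learns only a noise type and a bearing, never a range.

```python
import math

# Hypothetical noise types, analogous to DB2's shot flavors:
MATING_CALL, COOP_CALL, FOOD_FOUND, PING, WARNING = range(1, 6)

class SoundEvent:
    """A sound as an expanding ring that ignores bots and stops at the field edge."""

    def __init__(self, origin, noise_type, speed=3.0, max_radius=2000.0):
        self.origin = origin          # (x, y) of the emitter at the moment of the call
        self.noise_type = noise_type
        self.speed = speed            # ring growth per simulation step
        self.max_radius = max_radius  # the ring stops at the field edge
        self.radius = 0.0

    def step(self):
        self.radius = min(self.radius + self.speed, self.max_radius)

    def heard_by(self, listener):
        """Direction-only detection, like a single submarine ping: a listener the
        ring is currently passing over learns the noise type and a bearing toward
        the source, but not the distance."""
        dx = self.origin[0] - listener[0]
        dy = self.origin[1] - listener[1]
        if abs(math.hypot(dx, dy) - self.radius) <= self.speed / 2:
            return self.noise_type, math.atan2(dy, dx)
        return None
```

A bot that hears a PING could "fire down the bearing" exactly as described, since a bearing is the only positional information a single event yields; ranging would require triangulating across multiple events over time.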
-
Why not instead of shots, use something that works like the eye system?
-
Why not instead of shots, use something that works like the eye system?
The eye system would be a good example. I just used shots as an example because shots already have types. So this would be like sound types.
-1 shots are for hunting.
-2 shots, Energy for feeding.
-3 shots, Venom (Robot is immune to Venom from his own species).
-4 shots, Waste.
-5 shots, Poison in response to an incoming "-1 .shot". (Robot is immune to poison from his own species).
-6 shots, Body.
-7 shots, You cannot shoot viruses, but the .shflav of a virus is -7
-8 shots, Sperm
-
Hehe, I thought when people talk about the 'system' of something they refer to something more dynamic and complex.
-
Hehe, I thought when people talk about the 'system' of something they refer to something more dynamic and complex.
Are you saying the eye system isn't dynamic and complex? Or you'd prefer dynamic and complex sound?
-
Botsareus might want something like 2D raytracing.
-
Hehe, I thought when people talk about the 'system' of something they refer to something more dynamic and complex.
The best systems are ingenious in that the basic idea is quite simple.
-
Not really, Spork: the current eye system has distance information we don't need, and ray tracing is slow.
-
What we need is something that at the very least meets my requirements. Something awesome. And something bio-accurate.
-
So yeah, a little like ray casting, except with no directional data. Kinda like an inverse of the current eye system.
-
Sound would be cool... but we'd need more complicated memory storage if we wanted to implement it properly: some sort of Memloclist, so that in a free memory location we could store a 'chain' of information instead of just one value...
I'm still rooting for scent, chemicals, etc. (because if we added scent we'd need chemicals and nutrients around the map as well).
Offtopic: I just thought, surely decayed corpses (ones no longer on the map) should give some sort of energy boost to nearby plants?
-
I am looking at it from the perspective of balancing out the eye system.