3D!


Zelos:
It isn't possible. How are you supposed to visualize a dimension you've never seen or experienced? We humans can't in any way visualize the 4th spatial dimension.

Numsgil:
The mathematics for handling things like collisions already exists, so the real problem is conveying the information to the observer.

What is available on the screen to denote spatial dimensions? Well, you have x, y, color, and distortion.

Clearly you can devise an arbitrary set of rules for how to interpret 4D, the same way you can create an arbitrary set of rules to interpret 3D on the 2D screen.

Okay, imagine this:

x,y position -> x,y position of graphic
size -> z position
Darkness -> w (the 4th-dimension variable). Say a bot's color is red: a low w value would result in a deep crimson, a high w value in a pinkish tint.

In this case, a collision could be predicted by you, the viewer, if the bots:

1.  are near each other on the screen
2.  are close in size
3.  are close in color.
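
Here's a minimal sketch of that mapping in Python. The coordinate ranges, the crimson-to-pink endpoints, and the function name are my own assumptions, just to make the idea concrete:

--- Code: ---
# Hypothetical 4D -> screen mapping sketch; ranges are assumed, not from the sim.
Z_MAX = 1000.0   # assumed depth of the z axis
W_MAX = 1000.0   # assumed extent of the 4th (w) axis

def project(x, y, z, w):
    """Map a 4D position to (screen_x, screen_y, sprite_radius, rgb_color)."""
    radius = 2.0 + 20.0 * (1.0 - z / Z_MAX)   # nearer in z -> bigger sprite
    t = w / W_MAX                             # 0 = deep crimson, 1 = pinkish
    rgb = (139 + int(t * 116),                # crimson (139, 0, 0) ...
           int(t * 192),                      # ... fades toward ...
           int(t * 203))                      # ... pink (255, 192, 203)
    return (x, y, radius, rgb)

# project(100, 200, 500, 0) -> (100, 200, 12.0, (139, 0, 0))
--- End code ---

Two bots that are near each other on screen, similar in sprite size, and similar in shade would then be candidates for a 4D collision.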

k0zm0:
I think the biggest of our problems is eyes. How would a bot know what it is looking at?
Let's say a bot is moving on the standard x, y axes. What happens when it sees a bot on the z axis? How many eyes would be needed for the bot to successfully turn toward a bot on the z axis?
If we have eyes 1 to 9, then the bot would need at least that many again on the z axis: 9 horizontal plus 8 more vertical (the center eye can be shared), 17 eyes!!!!!

--- Code: ---          
           9
           8   X
           7
           6
   1 2 3 4 5 6 7 8 9
           4
           3
           2
           1
--- End code ---


X - the bot in 3D space
numbers - the robot's eyes

The bot is at eye 7 horizontally and eye 8 vertically (up the z axis).
                       

But how would a bot know there is actually an object there?
Or could we just redo the eyes in some other way?
There would also need to be a function for how many objects it sees.

Numsgil:
I was thinking something along the same lines.

The regular eyes scan for everything in the regular 2D plane, but extended upward along the z axis. This means the regular eyes can still be manipulated to turn in the right xy direction.

Another set of eyes works perpendicular to this, allowing an .aimpitch command to help the bot orient itself. Both sets of eyes work in a strict 2D sense, but since they're perpendicular to each other they end up forming a grid.

But you're right, that's another 9 eyes.

The result is that a bot's field of vision looks like a square cone with the bot at its point.

Here's a simple turning gene I imagine in such a system:

--- Code: ---
cond
*.eye4 *.eye6 !=
*.eye4Z *.eye6Z != or
start
*.eye4 *.eye6 sub .aimdx store
*.eye4Z *.eye6Z sub .aimpitch store
stop
--- End code ---

There's undoubtedly some math I'd have to look into for all this as well.
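
I'd guess the core of it is just two atan2 calls. A rough sketch in Python (the 9x9 grid, the field-of-view angle, and every name here are assumptions, not how the sim actually works):

--- Code: ---
import math

EYES_PER_AXIS = 9          # assumed: the same 9-eye fan on each axis
FOV = math.radians(90.0)   # assumed total field of view per axis

def eye_cell(dx, dy, dz):
    """Return (xy_eye, z_eye) for a target at relative position
    (dx, dy, dz), or None if it falls outside the square cone."""
    yaw   = math.atan2(dy, dx)                  # angle within the xy plane
    pitch = math.atan2(dz, math.hypot(dx, dy))  # angle up out of that plane
    half = FOV / 2.0
    if abs(yaw) > half or abs(pitch) > half:
        return None                             # outside the view cone

    def index(angle):
        # Map [-half, +half] onto eyes 1..9, with eye 5 dead ahead.
        return 1 + min(EYES_PER_AXIS - 1,
                       int((angle + half) / FOV * EYES_PER_AXIS))

    return (index(yaw), index(pitch))
--- End code ---

The gene above would then steer by feeding the yaw imbalance to .aimdx and the pitch imbalance to .aimpitch.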

PurpleYouko:
You would also have to make the collision detection routines work in a nested fashion (first x, then y then z). This would cost a whole lot in sim speed.
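
Something like this, roughly; a sketch of the nested check plus the usual sort-on-one-axis trick (sweep and prune) that keeps the extra dimension from hurting too much. The tuple layout and the radius are assumptions, not the sim's actual code:

--- Code: ---
# Assumed: bots are (x, y, z) tuples sharing a common collision radius.
RADIUS = 10.0

def overlap(a, b):
    """The nested test: bail out on x, then y, then z."""
    if abs(a[0] - b[0]) > 2 * RADIUS: return False
    if abs(a[1] - b[1]) > 2 * RADIUS: return False
    if abs(a[2] - b[2]) > 2 * RADIUS: return False
    return True   # bounding boxes overlap; do the exact sphere test here

def broad_phase(bots):
    """Sort on x so each bot only checks its near neighbors in x order."""
    bots = sorted(bots, key=lambda b: b[0])
    pairs = []
    for i, a in enumerate(bots):
        for b in bots[i + 1:]:
            if b[0] - a[0] > 2 * RADIUS:
                break                 # everything later is even further in x
            if overlap(a, b):
                pairs.append((a, b))
    return pairs
--- End code ---

With the x sort, most pairs never get past the first comparison, so the z axis only adds one more cheap check for the pairs that survive.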
