
Hyperspeed Mode


jknilinux:

--- Quote from: Numsgil ---Speed of light in a CA is quite slow.
--- End quote ---

What I meant is: since we're using the hashlife algorithm there won't be a performance impact, so the only problem will be for the bots. And if the bots' DNA execution runs at a rate similar to the speed of light, it won't affect the bots either. Remember, these sims run into quadrillions and quintillions of cycles quickly, as long as you use hashlife. If anything, the DNA execution might not be able to keep up with the speed of light in the sim. Alternatively, we could execute a bot's DNA every, say, 10 cycles instead of every cycle. That way the speed of light would be much faster relative to the bots.

Sorry Nums, I hope we're not annoying you 0:D

Numsgil:

--- Quote from: jknilinux ---Sorry Nums, I hope we're not annoying you 0:D
--- End quote ---

A little.  But I don't mind too much.


--- Quote ---What I meant is: since we're using the hashlife algorithm there won't be a performance impact, so the only problem will be for the bots. And if the bots' DNA execution runs at a rate similar to the speed of light, it won't affect the bots either. Remember, these sims run into quadrillions and quintillions of cycles quickly, as long as you use hashlife. If anything, the DNA execution might not be able to keep up with the speed of light in the sim. Alternatively, we could execute a bot's DNA every, say, 10 cycles instead of every cycle. That way the speed of light would be much faster relative to the bots.
--- End quote ---

Hashlife for GoL can run into crazy large numbers of cycles, but if you try to hashlife Darwinbots you're not going to get anywhere near those numbers.  Even if you made the physics free, DNA execution and other stuff (body management, dead/alive checks, etc.) would show up as performance bottlenecks.  A super-optimized CA running something as complex as Darwinbots might achieve maybe 1000 cycles/sec or so.  That's an educated guess on my part, though, so take it with a grain of salt.

But I don't think you could even make it that fast.  See, GoL has binary states for the cells, but you need more info than that for something like Darwinbots: an empty cell, a cell belonging to bot 1, a cell belonging to bot 2, a cell of static geometry, etc.  GoL scales well because hashlife can memoize the common patterns it keeps seeing, but as you increase the number of states a cell can have, the number of possible patterns balloons and those cache hits become much rarer.
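To put rough numbers on that state explosion (just a back-of-the-envelope sketch, not anything measured from a real hashlife implementation):

--- Code: ---
# Back-of-the-envelope: how many distinct n x n tiles exist for a CA
# with k cell states?  Hashlife's speed comes from memoizing tiles it
# has already seen; the more possible tiles, the fewer cache hits.
def distinct_tiles(k, n):
    return k ** (n * n)

for k in (2, 4, 16):                # 2 = GoL; extra states for bot IDs, walls, etc.
    print(k, distinct_tiles(k, 4))  # 4x4 tiles: 65536, ~4.3e9, ~1.8e19
--- End code ---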

So it would work, but I don't think it's necessarily the magic bullet you're making it out to be.  But you could always take Evolve 4.0 and modify the source to use hashlife techniques, and see the resulting speed differences first hand.

jknilinux:
OK, so here's a port of DB I'm thinking of:

A 2D grid running a cellular automaton which, using the hashlife algorithm, will run at (with conservative estimates) 100 cps.

A third dimension contains the actual bots and their DNA, with no totalistic cellular automaton rules imposed on it. However, the bots move and interact in the cellular automaton dimension. It's not so much a third dimension as a parallel second dimension.

Bots are, by default, a large cluster of cells (say, 100), but can change their shape and size to anything possible, regardless of whether the shape is viable under the rules of the cellular automaton, because any shape a bot takes on is stable.

The speed of light is, somehow, fast enough compared to the bots' DNA execution to be realistic. For example, assume we have a plant like A. minimalis with a very small genome. By the time its DNA has been assembled/compiled, 10 cycles have passed in the sim, so that bot's DNA is executed once every 10 cycles. A more advanced bot, like seasnake, might take 100 cycles to finish assembling, at which point it finally executes its DNA. Or, all DNA is executed every 100 cycles. There are many ways to speed up the speed of light relative to the bots without a performance penalty, as in the sketch below.
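Something like this loop is what I have in mind (a rough sketch; step_ca and run_dna are hypothetical placeholders, not real Darwinbots or hashlife functions):

--- Code: ---
# Rough sketch of decoupling DNA execution from CA updates.
# step_ca() and run_dna() are hypothetical placeholders.

DNA_PERIOD = 10  # run each bot's DNA once per 10 CA cycles

def simulate(grid, bots, total_cycles):
    for cycle in range(total_cycles):
        grid = step_ca(grid)          # one CA generation (hashlife-accelerated)
        if cycle % DNA_PERIOD == 0:
            for bot in bots:
                run_dna(bot, grid)    # bots "think" 10x slower than light
--- End code ---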

Bots gain energy by "eating" cells

Bots can create structures out of cells. For example, an herbivore might create spaceships and fire them at still-lifes. The resulting chaos creates many new cells for it to feed on. Or, a bot firing a spaceship at an herbivore causes a larger spaceship to bounce back, similar to the shooting method we currently use in DB.

One important difference between this and Evolve 4.0 is that the bots live in a much more "analog" world; the bots cannot occupy single cells, but must occupy at least, say, several hundred. I think one of the major drawbacks of E 4.0 is that most bots were made up of just dozens of cells.

Basically, if you want it summarized in a single sentence, it'll combine the interesting, analog environment of DB with the speed of a CA.

I'm wondering if you think this is a viable project, or if it will fail like E 4.0 (last I heard, it was abandoned). How difficult is this? How difficult is it to merge completely different projects, such as DB and E 4.0, into a single program? Why doesn't Evolve 4.0 run at 1000 cps?

Numsgil:

--- Quote ---I'm wondering if you think this is a viable project, and if it will fail like E 4.0 (last I heard, it was abandoned).
--- End quote ---

It's a totally reasonable project that a single dedicated coder could probably complete.  I think Evolve died because Stauffer lost interest or had real life intervene, etc.


--- Quote ---How difficult is it to merge completely different projects, such as DB and E 4.0, into a single program?
--- End quote ---

Practically impossible.  You'd be much better off starting from scratch.


--- Quote ---Why doesn't Evolve 4.0 run at 1000 cps?
--- End quote ---

How fast did it run?  I never really played with it enough to get a sense of its speed.

I'd have to profile the program to tell you for sure (which, let's be honest, I'm not going to do), but my guess is that either:

1.  Stauffer wasn't as aggressive with optimizations as he could have been.  
2.  Or that executing KFORTH for each organism was the bottleneck.

#2 seems more likely to me, though it's probably a combination.  See, the problem with optimization is that even if you super-optimize the program's bottleneck, another part of the code just becomes the new bottleneck.  Execution time is almost always spread across functions in a long-tailed distribution.  So when you make a certain area of the program 10x faster, the program as a whole is maybe 30% faster, and the bottleneck has moved to some other, completely unrelated function.  If you make that function a million times faster, the program as a whole is now 35% faster.  There are diminishing returns.
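(This is basically Amdahl's law.  A quick sketch of the arithmetic, with made-up fractions:)

--- Code: ---
# Amdahl's law: speeding up a piece that takes fraction p of total
# runtime by a factor s gives an overall speedup of 1/((1 - p) + p/s).
def overall_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

# e.g. the bottleneck is 25% of runtime and we make it 10x faster:
print(overall_speedup(0.25, 10))   # ~1.29, i.e. the program is ~29% faster
# make the same piece a million times faster:
print(overall_speedup(0.25, 1e6))  # ~1.33, barely any better
--- End code ---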

There are probably optimizations to be made in the execution of KFORTH programs.  Same for Darwinbots DNA code.  But then you delve into something like compiler design and writing a JIT compiler, which are not trivial tasks at all.  Just by themselves, they'd be ambitious single-programmer tasks.
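For a sense of what "executing KFORTH" involves, here's a toy stack-machine interpreter in the same spirit (purely illustrative; the real KFORTH opcode set and semantics differ):

--- Code: ---
# Toy interpreter for a KFORTH-like stack language (illustrative only).
def run(program):
    stack = []
    for token in program:
        if isinstance(token, int):
            stack.append(token)            # literals push themselves
        elif token == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif token == "dup":
            stack.append(stack[-1])
        # ...dozens more opcodes; this dispatch loop, run once per
        # instruction per organism per cycle, is the cost a JIT would
        # try to eliminate.
    return stack

print(run([2, 3, "+", "dup"]))  # [5, 5]
--- End code ---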

Likewise with the CA aspects of Evolve: some clever programming could probably make the CA update step much faster, but the data structures and algorithms involved can quickly become a whole project in themselves.

But let's assume that you have unlimited programming resources (you're some AAA game studio) and you want to make an ALife sim.  The CA approach and the continuous physics approach have different inherent strengths and weaknesses.  Basically, CAs scale with the size of the universe, more or less linearly, while physics scales with the number of rigid bodies (naively O(n^2), but if you're smart it can also scale linearly).  For small sized DB sims, CA would probably be faster.  For a monster sized universe with a few rigid bodies, the physics is much faster (for sparse worlds, I mean).  The CA approach also parallelizes much more easily, so it lends itself well to using graphics cards, for instance.
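(The "if you're smart" part is broad-phase culling.  A minimal sketch using a uniform grid, hypothetical code rather than Darwinbots' actual physics:)

--- Code: ---
from collections import defaultdict

# Broad-phase collision culling via a uniform grid (a sketch, not
# Darwinbots' actual physics).  Instead of testing all n^2 pairs,
# each body is binned by position and only compared against bodies
# in nearby bins; roughly O(n) when the world is sparse.
CELL = 50.0  # bin size, on the order of a body's diameter

def candidate_pairs(bodies):  # bodies: list of (x, y) positions
    grid = defaultdict(list)
    for i, (x, y) in enumerate(bodies):
        grid[(int(x // CELL), int(y // CELL))].append(i)
    pairs = set()
    for (cx, cy), members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy), []):
                    for i in members:
                        if i < j:
                            pairs.add((i, j))
    return pairs  # only these pairs need an exact collision test
--- End code ---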

It's very similar to rasterizing vs. ray tracing for graphics actually.

For Darwinbots, I'm specifically avoiding anything CA-related.  Even things like substances diffusing through the environment won't be handled by CAs (even though they're admittedly quite common in fluid sims), because I wanted to allow for arbitrarily sized universes.  But for alife in general, CAs are actually far more common than physically based sims (I'm thinking Framsticks vs. Avida).

jknilinux:
How many bodies does it take for a physics approach to be faster than a CA? Also, are you referring to a very complex pattern in a CA versus a few bodies in a large sim? If you don't want to explain it, maybe you could give me a keyword to search on Google/wiki? Thanks!

BTW did my explanation of the possible implementation make any sense?

Also, thanks for putting up with all my annoying questions, I swear I'm almost done!
