Regarding the pitfalls of using this in a simulation, I was thinking more along the lines of potential increases in memory usage, depending on the implementation.
At its most fundamental, wouldn't each step want its own copy of the simulation state to read from and write to, for thread safety?
A particular copy of the DNA will want to process against a particular copy of the physical state of the simulation, i.e. not the same state the physics is currently solving, nor the same state that is being drawn.
That's not too bad, though. DNA doesn't change much from frame to frame, so that can be cached fairly effectively. And it only interacts with the world through the sysvars, so there's at most maybe 2K per bot that would be duplicated.
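The buffering scheme described above could be sketched roughly like this (a minimal double-buffer sketch in Python; the class and field names here are hypothetical, not from the actual codebase):

```python
import copy

class DoubleBufferedState:
    """Keep two copies of the per-bot state: readers (DNA execution,
    rendering) use the stable 'front' buffer while the physics solver
    writes the 'back' buffer, then the two are swapped once per cycle."""

    def __init__(self, initial_state):
        self.front = initial_state                 # read-only during a step
        self.back = copy.deepcopy(initial_state)   # written by the solver

    def swap(self):
        # Called once per cycle, after all writers have finished.
        self.front, self.back = self.back, self.front

# Since DNA only talks to the world through sysvars, only that small
# per-bot block (on the order of 2K) actually needs duplicating.
state = DoubleBufferedState({"bot_1": {"energy": 100}})
state.back["bot_1"]["energy"] = 95   # physics computes the next frame
state.swap()                         # readers now see the new frame
```

This trades a modest amount of memory per bot for the freedom to run readers and the solver concurrently without locks on the hot path.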
What's the current plan for things like finding out what a bot is able to sense?
I would start off by building a k-d tree or something, and then I can read from that while the original physical state is free to be written to safely in a different thread. But I gather you have some uber-cool algorithm for this detection that doesn't require spatial trees?
No, there's nothing really cool algorithm-wise that I've thought of. I want to do something that lets bots see other bots based only on the apparent size of the distant bot (and maybe some camouflage). And I want it to be additive, so that a single veggy halfway across the world is invisible, but a whole veggy island becomes very apparent (probably more as a smear of "something green"; I think there's a thread somewhere where I talk about rods and cones and the like). Traditional spatial data structures don't necessarily work well for this sort of thing. I'm going to brute-force it for the first version, then hammer on it some more later with other sorts of algorithms. I'll probably look into raycasting structures, since I think this has a lot in common with that. A k-d tree might be the solution at the end of the day.
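For illustration, the brute-force additive model described above might look something like this (a hedged sketch; the function names, the angular-size formula, and the threshold value are all assumptions, not the actual design):

```python
import math

def apparent_size(radius, distance):
    """Angular diameter (radians) of a bot of a given radius at a distance."""
    if distance <= radius:
        return math.pi  # overlapping or adjacent: fills the field of view
    return 2.0 * math.atan(radius / distance)

def visible_intensity(viewer, bots, threshold=0.01):
    """Brute-force additive visibility: a single distant veggy contributes
    almost nothing, but a whole island of them sums past the threshold."""
    total = 0.0
    for bot in bots:
        d = math.hypot(bot["x"] - viewer["x"], bot["y"] - viewer["y"])
        total += apparent_size(bot["r"], d)
    return total >= threshold, total

# One small bot far away is invisible...
lone, _ = visible_intensity({"x": 0, "y": 0},
                            [{"x": 1000, "y": 0, "r": 1}])
# ...but twenty of them in the same spot sum to "something green".
crowd, _ = visible_intensity({"x": 0, "y": 0},
                             [{"x": 1000, "y": 0, "r": 1}] * 20)
```

The O(viewers × bots) cost is what makes a raycasting or k-d tree structure attractive later, but the additive summation itself is what rules out a naive "nearest neighbor only" query.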
How is the physical state information going to be stored? Is it directly within a list of world entities (e.g. bots) that have properties such as location and rotation, or within some other separate data structure?
Probably it will work like this: there's a central bot array, and the bots themselves have instances of things like DNA and a physical body. That was the plan in the past, and I think it makes the most sense. Then for physics, there's probably another list of physical bodies inside the physics engine that's more agnostic about what the bodies are (i.e. probably containing shots, bots, and static shapes).
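The two-view layout described there could be sketched like so (a minimal Python sketch; the class names and fields are hypothetical placeholders for whatever the engine actually uses):

```python
from dataclasses import dataclass, field

@dataclass
class Body:
    """Physics-engine view: agnostic about what the body represents."""
    x: float
    y: float
    radius: float
    kind: str = "bot"   # could also be "shot" or "static"

@dataclass
class Bot:
    """Simulation view: owns its DNA and a reference to its body."""
    dna: list = field(default_factory=list)
    body: Body = field(default_factory=lambda: Body(0.0, 0.0, 10.0))

# Central bot array; the physics engine keeps its own flat body list
# that shares the same Body instances rather than copying them.
bots = [Bot()]
physics_bodies = [b.body for b in bots] + [Body(50.0, 50.0, 5.0, kind="shot")]
```

The key property is that the physics list holds references to the same bodies the bots own, so a physics update is immediately visible through the bot without any synchronization of duplicated fields.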
Obviously, rather than storing actual duplicate copies of each state, you could just have a single master and a set of deltas for each cycle, I guess?
Yeah, there's something like that in the DNA already.
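As a rough illustration of the master-plus-deltas idea (not the DNA's actual mechanism, just a generic sketch):

```python
def apply_deltas(master, deltas):
    """Rebuild one cycle's state from the master copy plus that cycle's
    deltas, instead of storing a full duplicate state per cycle."""
    state = dict(master)       # shallow copy of the master
    for key, value in deltas.items():
        state[key] = value     # overlay only the fields that changed
    return state

master = {"energy": 100, "x": 0, "y": 0}
cycle_deltas = [{"x": 1}, {"x": 2, "energy": 98}]
states = [apply_deltas(master, d) for d in cycle_deltas]
# states[1] == {"energy": 98, "x": 2, "y": 0}
```

Memory then scales with how much actually changes per cycle rather than with the full state size, at the cost of a reconstruction step when a reader needs a complete snapshot.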
Introducing a cycle of delay between bot DNA execution and bot action would make writing a bot significantly more difficult. A bot would need to attempt to extrapolate the data one cycle forward before acting in order to reliably do the things it should be doing.
Haha, bot programmers always ruin my good ideas. I'll probably abandon the idea as a core feature, then. I'm not a fan of "features" that bot programmers can work around, since they effectively act as just another hurdle to jump. That kind of defeats the purpose.
As far as threads, it seems like the main CPU eaters in DB2 were eye input and DNA execution. Both of these tasks seem parallelizable, since doing this for one bot isn't dependent on the results from another.
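Since each bot's DNA step depends only on that bot's own inputs, the per-bot work maps cleanly onto a thread pool. A minimal sketch (the `run_dna` body is a stand-in, not real DNA execution):

```python
from concurrent.futures import ThreadPoolExecutor

def run_dna(bot):
    """Hypothetical per-bot step: reads only this bot's own state,
    so bots can be processed independently and in parallel."""
    bot["out"] = bot["in"] * 2
    return bot

bots = [{"in": i, "out": 0} for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_dna, bots))
```

(In CPython specifically, the GIL means a process pool would be needed for a real CPU-bound speedup; in a .NET runtime, plain threads parallelize CPU-bound work directly. The structure of "map an independent function over the bot array" is the same either way.)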
Definitely. It would be interesting to have access to something like Larrabee (full disclosure: Intel owns the company I work for), or the Cell processor (as in the PS3, though I somehow don't think .NET is going to get ported). For any of the target platforms I'm aiming for (2-4 core CPUs), it just means it should be able to utilize a significant chunk of the available hardware.