This gets into areas of actual biology research, so smart people have had different opinions on the matter over the centuries, but personally I subscribe to a kind of haystack model. That's usually discussed in terms of the problem of altruism, but I think it works for the evolution of complex behavior generally.
The idea goes that an isolated population tends to become genetically homogeneous over time. Sometimes beneficial mutations develop, sometimes harmful ones, sometimes lots of neutral ones, but generally things tend to spread out and either become universal or go extinct. It's even possible for such a population to drive itself to extinction by accidentally wandering into an evolutionary dead end.
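If it helps to see the "universal or extinct" part concretely, here's a toy neutral-drift sketch in plain Python (nothing Darwinbots-specific; the population size, starting count, and generation cap are all numbers I made up). Run it a few times and a new allele almost always either fixes or disappears:
[code]
import random

def wright_fisher(pop_size=200, start_count=20, max_gens=100_000):
    # Neutral drift: each slot in the next generation inherits the allele
    # with probability equal to its current frequency.
    count = start_count
    for gen in range(max_gens):
        if count == 0 or count == pop_size:
            return gen, count  # 0 = lost, pop_size = universal
        freq = count / pop_size
        count = sum(1 for _ in range(pop_size) if random.random() < freq)
    return max_gens, count

# a handful of replicates: most runs end at 0, the occasional one at 200
for _ in range(5):
    print(wright_fisher())
[/code]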
Now imagine having dozens or hundreds of such populations, all from a common ancestor, isolated long enough that they're all a bit different. Then suddenly mix them together into a single population. Each line has to fight for survival, and a line with a beneficial mutation has an edge. Wait for the larger population to become homogeneous again, then split it back up into the original enclaves. Now repeat this on different time and size scales with different groups in a chaotic soup of isolating and recombining populations. You end up with a Red Queen's race of genetic lines competing against each other. The lines that handle this well survive; the ones that don't, don't. In organisms with sex or horizontal gene transfer, the different lines can even hybridize during the mixing phase, so mutations that are neutral or even slightly negative on their own can find each other and produce a beneficial result.
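Roughly, the isolate-then-mix cycle looks like this as a toy simulation (again plain Python, not Darwinbots; the mutate and fitness functions and every parameter are placeholders I invented just to show the shape of the loop):
[code]
import random

def mutate(genome, rate=0.05):
    # each locus occasionally drifts up or down a little; purely a placeholder
    return [g + random.gauss(0, 1) if random.random() < rate else g for g in genome]

def fitness(genome):
    # placeholder fitness: lines that happened to drift toward larger values win the mixing phase
    return sum(genome)

def haystack_cycle(n_demes=20, deme_size=30, genome_len=10, isolate_gens=100, cycles=5):
    demes = [[[0.0] * genome_len for _ in range(deme_size)] for _ in range(n_demes)]
    for _ in range(cycles):
        # isolation phase: each deme reproduces and mutates on its own, drifting apart from the others
        for d in range(n_demes):
            for _ in range(isolate_gens):
                parents = random.choices(demes[d], k=deme_size)
                demes[d] = [mutate(p) for p in parents]
        # mixing phase: pool everyone, keep the fitter half, reseed the enclaves from the survivors
        pool = [g for deme in demes for g in deme]
        pool.sort(key=fitness, reverse=True)
        survivors = pool[: len(pool) // 2]
        demes = [[random.choice(survivors)[:] for _ in range(deme_size)] for _ in range(n_demes)]
    return max(fitness(g) for deme in demes for g in deme)

print(haystack_cycle())
[/code]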
That's the theory. If you wanted to mimic something like this in Darwinbots, you would run a simulation for a while, until the population has accumulated some number of mutations and things seem to have stagnated. Then pick an exemplar bot and create a new simulation seeded with clones of that bot and of the original line. Run that for a while, pick an exemplar, then create a new simulation with the original, the first-round exemplar, and this new exemplar. Repeat ad nauseam, choosing which exemplars go together in a sim arbitrarily/randomly. You end up with strong selective pressure without the risk of a mutational meltdown. This is an awful lot like what DeepMind did for their StarCraft II research: they built a ladder and periodically pitted new versions of the AI against old versions to protect against "catastrophic forgetting". The versions that did well had to do well not just against their peers but against previous iterations of themselves. The difference in biology is that there's no external fitness function beyond keeping your line represented in the future somehow.
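Written out as pseudocode-ish Python, the ladder loop is just this (run_sim and pick_exemplar are stand-ins for actually running the program and grabbing a bot out of it, not real functions anywhere):
[code]
import random

def run_sim(seed_bots, generations=1000):
    # stand-in for a full Darwinbots run: in reality you'd load the seeds into
    # the program and let it run until things stagnate; here we just fake a population
    return [bot + f"+mut{random.randint(0, 99)}" for bot in seed_bots for _ in range(5)]

def pick_exemplar(population):
    # stand-in for choosing a representative bot; a random choice works fine
    return random.choice(population)

original = "original_bot"   # hypothetical name for the unmutated starting line
exemplars = []              # the growing ladder of past picks

for round_number in range(10):
    # each round is seeded with the original line plus an arbitrary handful of past exemplars
    seeds = [original] + random.sample(exemplars, k=min(3, len(exemplars)))
    population = run_sim(seeds)
    exemplars.append(pick_exemplar(population))

print(exemplars)
[/code]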
It's worth noting that as a human you can put your thumb on the scale and select for things you find interesting, but you don't have to, and it'd be perfectly fine to choose your exemplars randomly.
Also worth noting: in Darwinbots it's hard for DNA to get longer. There are mutation types set up for it, but practically speaking it doesn't happen very often. You can help things along a bit by adding a bunch of 0s to your bots' DNA to give it more room to develop mutations in.
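If you want to script that last bit, something like this would do it, assuming your bot lives in a plain-text DNA file and that bare 0 values are harmless filler the mutation operators can later overwrite (double-check both assumptions against your version of Darwinbots; the file names here are made up):
[code]
# append a run of zero values to a bot's DNA file to give mutations room to work in
padding = "\n" + " ".join(["0"] * 200) + "\n"

with open("my_bot.txt") as f:       # hypothetical file name
    dna = f.read()

# if the DNA finishes with an "end" keyword, slip the padding in just before it,
# otherwise just tack it onto the end of the file
stripped = dna.rstrip()
if stripped.endswith("end"):
    dna = stripped[: -len("end")] + padding + "end\n"
else:
    dna = stripped + padding

with open("my_bot_padded.txt", "w") as f:
    f.write(dna)
[/code]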