Very nice
I just realized by looking at this that if I want to write a complex neural network bot, then using a simple C program to train against the fixed outputs of a league bot to get the weights will be the least of my problems. I'm gonna need a way to autogenerate the code for the network and insert the weights... I haven't looked at Sanger yet; I'm hoping it can do that for me.
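Rough idea of what that training program could look like: a minimal sketch, assuming a single-layer linear net fitted with the delta rule to (input, output) pairs logged from the league bot. The input/output counts, sample count, and learning rate are all made-up placeholders:

```c
#include <stdio.h>

#define N_IN   4      /* hypothetical number of sensor inputs   */
#define N_OUT  2      /* hypothetical number of control outputs */
#define N_SAMP 100    /* (input, output) pairs logged from the league bot */
#define EPOCHS 10000
#define LRATE  0.01

double w[N_OUT][N_IN];     /* weights to train, then paste into the bot */
double x[N_SAMP][N_IN];    /* recorded inputs        */
double t[N_SAMP][N_OUT];   /* recorded target outputs */

int main(void)
{
    /* ... fill x[][] and t[][] from a log of the league bot's behavior ... */
    for (int e = 0; e < EPOCHS; e++)
        for (int s = 0; s < N_SAMP; s++)
            for (int o = 0; o < N_OUT; o++) {
                double y = 0.0;
                for (int i = 0; i < N_IN; i++)      /* forward pass */
                    y += w[o][i] * x[s][i];
                double err = t[s][o] - y;           /* delta rule update */
                for (int i = 0; i < N_IN; i++)
                    w[o][i] += LRATE * err * x[s][i];
            }
    for (int o = 0; o < N_OUT; o++)                 /* dump trained weights */
        for (int i = 0; i < N_IN; i++)
            printf("w[%d][%d] = %f\n", o, i, w[o][i]);
    return 0;
}
```

A hidden layer would need full backprop, but the point is the same: dump the trained weights at the end so the codegen step can paste them into the bot.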
And code execution costs would be high when running under F1 conditions...
But I'm not going to quit on this, even if Sanger doesn't help and I have to write a program to generate the bot myself.
I really think tweaking the values in a neural network would be a far more stable way of evolving bot behavior.
Ideally it should use all inputs and outputs (I realize there may not be enough free mem locations for that), start off with random fixed weights, and then let the weights change by mutation and possibly sexrepro. You could always train the weights from a bot's code in a separate program to create a base, and you could even have different degrees of training for the same bot. (I think you'd only be able to see how well trained the network is, not how well it generalizes, but you could just keep one fully trained network and others at different training steps.)
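Something like this is what I'm picturing for the mutation/sexrepro side (a sketch only; the genome size, mutation rate, and nudge strength are invented numbers):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N_WEIGHTS 64                /* made-up genome size */

/* per-weight mutation: small uniform nudge with some probability */
void mutate(double *w, double rate, double strength)
{
    for (int i = 0; i < N_WEIGHTS; i++)
        if ((double)rand() / RAND_MAX < rate)
            w[i] += strength * (2.0 * rand() / RAND_MAX - 1.0);
}

/* sexrepro analogue: each weight comes from one parent at random */
void crossover(const double *a, const double *b, double *child)
{
    for (int i = 0; i < N_WEIGHTS; i++)
        child[i] = (rand() & 1) ? a[i] : b[i];
}

int main(void)
{
    double mom[N_WEIGHTS], dad[N_WEIGHTS], kid[N_WEIGHTS];
    srand((unsigned)time(NULL));
    for (int i = 0; i < N_WEIGHTS; i++) {   /* random fixed starting weights */
        mom[i] = 2.0 * rand() / RAND_MAX - 1.0;
        dad[i] = 2.0 * rand() / RAND_MAX - 1.0;
    }
    crossover(mom, dad, kid);
    mutate(kid, 0.05, 0.1);                 /* 5% chance, +/-0.1 nudge */
    printf("kid[0] = %f\n", kid[0]);
    return 0;
}
```

Per-weight uniform crossover is just one choice; you could also swap whole neurons between parents.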
I realize I can't have all inputs and outputs; I'd have to narrow it down to the essentials...
And I know this isn't something I can just whip up overnight; I won't even have time to start on it any time soon...
But I really think this could help produce faster evo results, since the different mutations break genes, produce redundant code, and generally have a low chance of doing something useful, or of doing anything at all. I know that isn't unnatural, and I love how mutated code can actually get better, but mutating and swapping weights in a neural network will usually have some effect, and it allows softer changes and overall tweaks in behavior. Most mutations will still be harmful, but I'm hoping they'll be harmful less often and make less radical changes.
It just hit me that it wouldn't need to take up more mem locations than the number of hidden neurons, since the weights would be fixed and would only change by mutation. Much more manageable... maybe I will go for all inputs and outputs then.
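In other words, the generator could bake the weights in as literal constants and only reserve a mem location per hidden neuron. A toy sketch of that, emitting placeholder pseudocode rather than real bot code (the in0/hid0 names are invented):

```c
#include <stdio.h>

#define N_IN  3   /* made-up sizes */
#define N_HID 2

double w[N_HID][N_IN] = {           /* trained/evolved weights, baked in */
    {  0.52, -1.10, 0.07 },
    { -0.33,  0.85, 0.41 },
};

/* Emit one statement per hidden neuron. The weights appear as literal
 * constants, so the only run-time memory the bot needs is one location
 * per hidden neuron (hid0, hid1, ...); in0.. stand in for whatever
 * input locations the bot actually reads. */
int main(void)
{
    for (int h = 0; h < N_HID; h++) {
        printf("hid%d = ", h);
        for (int i = 0; i < N_IN; i++)
            printf("%s%+f * in%d", i ? " " : "", w[h][i], i);
        printf("\n");
    }
    return 0;
}
```

With N inputs and H hidden neurons that's N*H constants baked into the code but only H mem locations at run time.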
(I really hope I can use Sanger for most of this, but I kinda doubt it)