General > Bot Challenges
Challenge #3: Neural Network
Numsgil:
Neat, can't wait to see where you take it.
Elite:
I haven't got a proper optimisation method yet, but I have got a way of retrieving useful neural network configurations, by loading hundreds of randomized bots at a time and letting them fight it out.
Four of the most interesting bots I found are attached. Look at them go!
Some interesting consequences:
- Bots can form 'memories', created by loops in their architecture
- This can allow bots a certain degree of individuality depending on their experiences
- Bots have a 'reaction time' during which stimuli filter from input to output
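The loop-memory and reaction-time effects described above can be sketched with a toy recurrent node (the weights and structure here are hypothetical, just to show the mechanism):

```python
# Toy recurrent network: one input, one hidden node with a self-loop, one
# output. The self-loop lets a past stimulus linger ("memory"), and the
# hidden layer means a stimulus takes one extra tick to reach the output
# ("reaction time").

def step(state, stimulus, w_in=1.0, w_self=0.5, w_out=1.0):
    """One tick: the new hidden state mixes the stimulus with the old state."""
    new_state = w_in * stimulus + w_self * state
    return new_state, w_out * state  # output lags the input by one tick

state = 0.0
outputs = []
for stimulus in [1.0, 0.0, 0.0, 0.0]:  # a single pulse, then silence
    state, out = step(state, stimulus)
    outputs.append(out)

print(outputs)  # the pulse appears one tick late, then decays: 0.0, 1.0, 0.5, 0.25
```

The decaying tail after the stimulus stops is the 'memory': the bot's output still depends on what it saw earlier.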
I think I'm going to let epigenetic evolution (mutations generated between parent and child) handle the connection structure. The weights and biases of the nodes I'd like to have adjusted during a bot's life according to how much pain/pleasure it has been getting (so a bot can learn by very dumb trial and error), and then have the values passed on epigenetically.
Or that's my plan at least. Feel free to play around with my bots and neural network method, experiment, add your own features etc.
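The "very dumb trial and error" idea above could be as simple as a hill-climb: nudge a weight, and keep the nudge only if the pain/pleasure score improves. A minimal sketch (the scoring function and names are made up for illustration):

```python
import random

def learn_step(weights, evaluate, step_size=0.1, rng=random.Random(0)):
    """Dumb trial-and-error: nudge one weight; keep the nudge only if the
    pain/pleasure score improves (a tiny hill-climb, not backprop)."""
    before = evaluate(weights)
    i = rng.randrange(len(weights))
    old = weights[i]
    weights[i] = old + rng.uniform(-step_size, step_size)
    if evaluate(weights) < before:   # more pain than before -> undo the nudge
        weights[i] = old
    return weights

# Hypothetical example: "pleasure" is highest when the weights hit a target.
target = [0.3, -0.7]
pleasure = lambda w: -sum((a - b) ** 2 for a, b in zip(w, target))

w = [0.0, 0.0]
for _ in range(2000):
    w = learn_step(w, pleasure)
```

Because a nudge is only kept when the score improves, the weights drift toward whatever behavior earns pleasure, with no gradient math needed.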
Moonfisher:
Very nice
I just realized by looking at this that if I want to write a complex neural network bot then using a simple C program to train the fixed outputs of a league bot to get the weights will be the least of my problems. Gonna need a way to autogenerate the code for the network and insert the weights... I haven't looked at Sanger yet, hoping it can do it for me.
And code execution costs would be high if running F1 conditions...
But I'm not going to quit on this, even if Sanger doesn't help and I have to write a program to generate the bot myself.
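Autogenerating the network code from a weight table could look something like this sketch. The emitted syntax is only stack-style pseudocode (the locations and the exact DB opcodes are assumptions, not verified DNA) -- the point is the generation pattern:

```python
# Sketch of baking trained weights into generated bot code. Weights are
# stored as fixed-point integers (scaled by 100) since I'm assuming the
# target language is integer-only.

def emit_neuron(out_loc, inputs, weights, scale=100):
    """Emit lines computing a weighted sum of input locations into out_loc.
    Multiplying before dividing keeps precision in integer math."""
    lines = ["0"]  # start the running sum at zero on the stack
    for loc, w in zip(inputs, weights):
        lines.append(f"*{loc} {int(w * scale)} mult {scale} div add")
    lines.append(f"{out_loc} store")
    return lines

# ".hidden1" is a placeholder for whatever free location the generator picks.
code = emit_neuron(".hidden1", [".eye5", ".refeye"], [0.75, -0.25])
print("\n".join(code))
```

A separate training program would only need to rewrite the integer constants and re-emit, leaving the structure untouched.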
I really think tweaking the values in a neural network would be a far more stable way of evolving bot behavior.
Ideally it should use all inputs and outputs (I realize there may not be enough free mem locations for that), and just start off with random fixed weights, then let the weights change by mutation and possibly sexrepro. Then you could always train weights from a bot's code in a separate program to create a base; you could also have different degrees of training for the same bot (I think you would only be able to see how well trained the network is, not how good it is at generalizing, but you could just have a fully trained network plus some at different training steps).
I realize I can't have all inputs and outputs, would have to narrow it down to essentials...
And I know this isn't something I can just whip up over night, I won't even have time to start on this any time soon...
But I really think this could help produce faster evo results, since all the different mutations break genes and produce redundant code, and generally have low chances of doing something useful, or doing anything at all. I know this isn't unnatural, and I love how mutated code can actually get better, but by mutating and swapping weights in a neural network the change will usually have some effect, and it can be softer changes and overall tweaks in behavior. Most mutations will still be harmful, but I'm hoping they'll be less frequent and less radical.
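The "softer changes" contrast can be sketched directly: mutating a weight is a small numeric nudge, and a rough sexrepro analogue just picks each weight from either parent (all names and rates here are illustrative, not DB's actual mutation model):

```python
import random

def mutate(weights, rate=0.05, sigma=0.05, rng=random):
    """Offspring weights: each weight has a small chance of a small Gaussian
    nudge -- a 'soft' tweak, compared to breaking a whole gene."""
    return [w + rng.gauss(0, sigma) if rng.random() < rate else w
            for w in weights]

def crossover(a, b, rng=random):
    """Very rough sexrepro analogue: each weight comes from either parent."""
    return [x if rng.random() < 0.5 else y for x, y in zip(a, b)]

parent = [0.2, -0.5, 0.9]
child = mutate(parent)
```

Either way the offspring still runs the same network structure; only the behavior shifts slightly, which is the stability being argued for.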
It just hit me that it wouldn't need to take up more mem locations than the number of hidden neurons, since the weights would be fixed and only change by mutation. Much more manageable... maybe I will go for all inputs and outputs then.
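The bookkeeping behind that observation, with made-up layer sizes: the weights are constants baked into the code, so only the hidden activations need locations between ticks.

```python
# With fixed weights compiled into the DNA, the only state that must live in
# memory locations is the hidden activations (outputs already have their own
# locations, inputs are read directly). A quick count with example sizes:

n_inputs, n_hidden, n_outputs = 20, 6, 8

n_weights = n_inputs * n_hidden + n_hidden * n_outputs  # constants in code
mem_locations_needed = n_hidden                          # stored state only

print(n_weights, mem_locations_needed)  # 168 weights, but only 6 locations
```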
(I really hope I can use Sanger for most of this, but I kinda doubt it)
Numsgil:
Keep us posted. I'd really love to see a successful neural net bot in the leagues.
Moonfisher:
Heh I'm already having problems determining how to convert certain inputs, like *.refxpos ...
And generally some inputs seem harder for a bot to understand. So I would probably have to use the *.refxpos *.refypos angle as an input, and possibly have no input for the raw values. (I can divide the angle by 1220, and if it can exceed that I can always use mod; the point is I know its max value, and I think that makes it easier for a bot to put to use.)
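That conversion could look roughly like this sketch. The 1220 full-circle value is taken from the post and treated as a placeholder, and atan2 stands in for whatever angle operation the bot would actually use:

```python
import math

MAX_ANGLE = 1220  # assumed full-circle value from the post; verify against DB

def angle_input(refxpos, refypos):
    """Map a target's relative position to a single normalized [-1, 1] input."""
    units = math.atan2(refypos, refxpos) / (2 * math.pi) * MAX_ANGLE
    units %= MAX_ANGLE            # mod keeps it in range even if it overflows
    return 2 * units / MAX_ANGLE - 1

print(angle_input(100, 0))   # angle 0 maps to the bottom of the range: -1.0
```

Simple and linear, as suggested: one known-range number per target instead of two raw coordinates.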
But it's a huge job, still not sure exactly how the values should be converted, but probably going to go for something simple and linear, since I get confused trying to write any complex equation in DB...
And there's probably plenty of values I haven't thought of which will be hard or impossible to convert, but I'll try to get as much raw data in it.
Also wondering how decimal values work... can I save 50 100 div into a location and have 0.5 saved? If not, can I do this: 50 100 div 100 mult and have 50 again (will decimals work on the stack even if not when saved)? Or would I risk ending up with 0 or 100?
Would make things easier for me if I could just use decimal values.
As far as I can see all values should be converted to a value between -1 and 1, and I would probably start out with randoms in that range. The problem is I'm guessing mutations will change the weights more than intended, so I may have to scale everything to -100 to 100 anyway... still, it would be nice if decimal values at least worked on the stack.
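I believe the DB stack is integer-only with truncating division (worth verifying), in which case the fraction is lost the moment div runs, and the usual workaround is to multiply before dividing. Illustrated here with Python's truncating integer division standing in for div:

```python
# If "div" truncates, "50 100 div" leaves 0, and "50 100 div 100 mult"
# leaves 0 as well -- the decimal never existed. Reordering fixes it:

print(50 // 100 * 100)   # 0  -- divide-then-multiply loses everything
print(50 * 100 // 100)   # 50 -- multiply-then-divide keeps the value

# Which is exactly the -100..100 scaling idea: carry "0.5" as the integer 50.
half_as_fixed_point = 50                 # 0.5 scaled by 100
print(half_as_fixed_point * 80 // 100)   # 40, i.e. 0.5 * 0.8 = 0.4
```

So scaling everything to -100..100 isn't just a safety margin against mutation size; under integer math it's what makes the weighted sums computable at all.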
But this is going to take ages. It will be a few months till I have time to put some serious effort into it, and then I need to find a way to convert as many inputs as possible. I may also be underestimating what I'll need to do with the outputs, then there's the program to generate the code with all the weights, and a program to extract training values from a bot, and I'm probably completely overlooking something that will come back to haunt me.
Just hope I can get it done before DB3 comes out