Darwinbots Forum

General => Leagues => Bot Challenges => Topic started by: Elite on September 23, 2007, 05:03:25 PM

Title: Challenge #3: Neural Network
Post by: Elite on September 23, 2007, 05:03:25 PM
I'm going to have another shot at these, but I'm going to do them in a more informal way. There will still be a hall of fame for the authors and bots that succeed at the challenges. I'm just going to put out a bunch of open challenges for bot programmers to have a go at, and if you manage to program a bot that meets the criteria, post it.

Also, instead of pretty useless challenges like mazerunning and cliff-jumping, I'm going to be setting challenges at the cutting edge of bot programming.

Okay, so ...

CHALLENGE #3

Code a bot with behaviors that do not come straight from the DNA. Specifically, create a bot whose behavior is controlled by an artificial neural network. You'll probably need to find out how they work before starting if you don't know already.

Use the following inputs and outputs only:

Input nodes:

*.eye5

*.refeye

Output nodes:

.up (continuous)

.aimdx (continuous)

-1 .shoot store (boolean)

Those are the only sysvars you can use. Don't bother with anything else, reproduction, body management, combat etc. Just the neural network and the above inputs + outputs.

The network must learn by reinforcement learning, using *.pleas averaged over the last 50 cycles as the reinforcement anti-cost. To get the average pleas over 50 cycles, please use this gene:

def avpleas 13

cond
start
*.pleas *.robage 50 mod 991 add store
*991 *992 add *993 add *994 add *995 add *996 add *997 add *998 add *999 add *1000 add *1001 add *1002 add *1003 add *1004 add *1005 add *1006 add *1007 add *1008 add *1009 add *1010 add *1011 add *1012 add *1013 add *1014 add *1015 add *1016 add *1017 add *1018 add *1019 add *1020 add *1021 add *1022 add *1023 add *1024 add *1025 add *1026 add *1027 add *1028 add *1029 add *1030 add *1031 add *1032 add *1033 add *1034 add *1035 add *1036 add *1037 add *1038 add *1039 add *1040 add 50 div .avpleas store
stop


(In case you're wondering, no. I didn't do that by hand)
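
To give a feel for the plumbing, here's a minimal sketch of the sort of skeleton I mean: one hidden node wired to the allowed inputs and outputs. Locations 901 - 904 (weights) and 905 (the node value) are arbitrary free memory, and the actual challenge is making those weights move in response to *.avpleas rather than setting them by hand:

cond
start
*.eye5 *901 mult *.refeye *902 mult add 100 div 905 store 'hidden node: weighted sum of the two inputs
*905 *903 mult 100 div .up store 'continuous output
*905 *904 mult 100 div .aimdx store 'continuous output
stop

cond
*905 0 >
start
-1 .shoot store 'boolean output
stop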
Title: Challenge #3: Neural Network
Post by: Peter on September 24, 2007, 03:54:51 PM
Quote
Code a bot with behaviors that do not come straight from the DNA. Specifically, create a bot whose behavior is controlled by an artificial neural network. You'll probably need to find out how they work before starting if you don't know already.

Behavior that does not come straight from the DNA? Controlled by an artificial network? I am beginning to feel stupid. I have a slight idea what a neural network is, but can you give a small example of what is meant? I haven't really got a clue.

Do you want the bot to learn something by itself? Like: look, here I can get nrg, and the bot has to learn to come and shoot at it? I think I totally miss the point.

Oh wait I missed this
Quote
Also, instead of pretty useless challenges like mazerunning and cliff-jumping, I'm going to be setting challenges at the cutting edge of bot programming.

I don't feel like I'm really part of the cutting edge of bot programming, but even if I only understand the idea a little, I just want to see the result when it comes; could be interesting.
Title: Challenge #3: Neural Network
Post by: Numsgil on September 24, 2007, 04:00:03 PM
Neural network (http://csclab.murraystate.edu/bob.pilgrim/445/index.html).
Title: Challenge #3: Neural Network
Post by: Elite on September 24, 2007, 05:02:06 PM
That's definitely going to be useful

What I mean by behavior that doesn't come straight from the DNA, and by meta-coding, is behavior that isn't deterministically generated by a list of if ... then triggers: bots that can be affected by their environment and find their own strategies, rather than reading off a rigid instruction manual.

(Endy has produced some bots that begin along these lines, using evolutionary algorithms made possible by epigenetics. Reproduction values, thresholds and the like are all free-floating and determined by epigenetic inheritance, with slight mutations each time.)

An ANN is one such nongenetic approach. The way I think I'm going to start is to assign each node (~5 invisible nodes, if that?) to a gene, and reserve a bunch of non-sysvar memory locations for weightings (which can be made epigenetic in later functional versions).

Any advice Num?
Title: Challenge #3: Neural Network
Post by: Numsgil on September 24, 2007, 05:43:28 PM
I thought about implementing a neural net in DNA before.  However, I don't think you should try to code it straight.  Neural nets need a lot of memory for the different weightings and such.  Because the free memory locations are scattered throughout the bot's memory structure, trying to hand-code everything would lead to a lot of insanity.  Instead, I'd program something in a scripting language and "compile" it to working bot DNA.

Neural nets are one of the few instances where 1000 memory locations is actually restrictive.  Also, depending on how you look at it, a bot's gene structure is sort of like a complex neural net with nonlinear weightings.  Each gene takes some inputs, manipulates them, and produces some outputs.

Once upon a time I was playing with the idea of setting up codules in the new DNA system in a network like a neural net.  Each codule would take its input and feed it to another codule, which would either manipulate it some more or feed it to an output codule.  Each codule could have any DNA it wanted, so it could act as a simple additive weighting or a more complex evaluation.

You could also set up a neural net with several bots.  Have each bot act as a node, with either tieval or in/out (or some combination) as the method of communication.  Have most of the bots be blind ("hidden"), with a few designated as inputs (eyes) and a few as outputs (motion and shots).  You could theoretically have the large neuralbot be robust as to the number and connectivity of its bots, and have the connections be determined by some simple genetic code subject to mutation.
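
For the in/out flavor, a single "hidden" node-bot could be as small as this sketch (the weights are made up, and note the in channels only carry values from a bot currently in view, so the wiring of who looks at whom is the hard part):

cond
start
*.in1 30 mult *.in2 -70 mult add 100 div .out1 store 'weighted sum of two in channels, rebroadcast on out1
stop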
Title: Challenge #3: Neural Network
Post by: MacadamiaNuts on September 24, 2007, 08:55:10 PM
Mmmh...

This may be a starting point (ignore the stuff at the beginning):

http://www.darwinbots.com/Forum/index.php?showtopic=2176 (http://www.darwinbots.com/Forum/index.php?showtopic=2176)

Instead of building the DNA through random evolution, it could go one gene at a time: trying a set of values for a certain amount of time, then storing the outcome, trying a second set, comparing both outcomes and saving the good one, then looping until a solution for the first gene is found, then starting to build the second one.

Once it's done with 4-6 genes, they could keep checking small changes over short periods of time; if a winner appears, check it again over a longer period, then adopt it. When it meets another "done" bot, they could compare their overall outcomes and the loser could take the knowledge from the winner.

Basically, I think we can do whatever we want within the current limits, as long as the bot does things one step at a time. Heck, I even think we could try and do it with only one store... xD
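
In gene terms, one of those compare-and-keep steps might look roughly like this sketch (locations 980-982 are arbitrary: 980 is the value being tried, 981 the best so far, 982 the best outcome so far, with Elite's *.avpleas as the score; some other gene would actually use *980 as its weight):

cond
*.robage 200 mod 0 =
*.avpleas *982 >
start
*980 981 store 'the trial beat the old best, so keep it
*.avpleas 982 store
stop

cond
*.robage 200 mod 0 =
start
*981 20 rnd add 10 sub 980 store 'start a new trial near the current best
stop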
Title: Challenge #3: Neural Network
Post by: Elite on September 25, 2007, 01:11:05 PM
I've got one! (See attached)

A really simple 2-input, 3-output (one of which is a boolean) 5-invisible-neuron network (or something very loosely resembling something that might be called a neural network) with 3 inputs per neuron. All the connectivity data, weightings, biases and node states take up 64 memory slots (902 - 965).

The biggest portion of the code is the declaration of custom variables (I don't know why I even bothered now).

So far I have no way of actually assigning weights or connections. It's just a demonstrator.
(I'd be mildly surprised if this thing actually works when it does have a system of optimising the weights and connectivity)

Unfortunately, whenever I try and load it into DB the program crashes  

Comments?

-----------------------------------------------------

@ Macadamia

Looks interesting. Can't quite work out what's going on though. I'm guessing it's some sort of heuristic system using 'mutation' of variables that in turn control behavior. Am I close?

971 19 rnd add inc 999 ceil store

^ And what does the 999 ceil do there (or the store for that matter)?

-----------------------------------------------------

(And I've just realised that my .avpleas gene in the original post uses memlocs up to 1040! Doh!)
Title: Challenge #3: Neural Network
Post by: Numsgil on September 25, 2007, 03:59:59 PM
There's probably a limit on custom variables that's never been reached before.  Try limiting it to like 30 or so.  That should let you load it without crashing.
Title: Challenge #3: Neural Network
Post by: Elite on September 25, 2007, 04:37:58 PM
Yep, that does the trick

I've attached the bot with all custom variables removed. It's a bit harder to read, but it loads.

EDIT: And I've also altered the code so that the biases are added, like they are supposed to be, rather than multiplied

A quick tryout with completely randomised neural net settings shows that it also works! Completely scatterbrained, spinning and shooting and moving at random, but it reacts to the presence of something that it can see.
Title: Challenge #3: Neural Network
Post by: EricL on September 25, 2007, 06:17:29 PM
Quote from: Numsgil
There's probably a limit on custom variables that's never been reached before.

FYI, it's 50 in current builds.  I'll raise it to 100 in the next drop.
Title: Challenge #3: Neural Network
Post by: Numsgil on September 25, 2007, 11:18:56 PM
Neat, can't wait to see where you take it.
Title: Challenge #3: Neural Network
Post by: Elite on September 26, 2007, 02:58:55 PM
I haven't got a proper optimisation method yet, but I have got a way of retrieving useful neural network configurations, by loading hundreds of randomized bots at a time and letting them fight it out.

Four of the most interesting bots I found are attached. Look at them go!  

Some interesting consequences:
 - Bots can form 'memories' via loops in their architecture
 - This can allow bots a certain degree of individuality depending on their experiences
 - Bots have a 'reaction time' during which stimuli filter from input to output

I think I'm going to let epigenetic evolution (mutations generated between parent and child) handle the connection structure; the weights and biases of the nodes I'd like to have adjusted during a bot's life in accordance with how much pain/pleas it has been getting (so a bot can learn by very dumb trial and error), and then have the values passed on epigenetically.

Or that's my plan at least. Feel free to play around with my bots and neural network method, experiment, add your own features etc.
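
For the weight adjustment I'm imagining something as dumb as this sketch (950 = last pleasure average, 951 = a +/-1 nudge direction, 902 stands in for one weight; all arbitrary free locations): every 50 cycles, reverse direction if the average pleasure dropped, then nudge the weight and remember the new average.

cond
*.robage 0 =
start
1 951 store 'start off nudging upwards
stop

cond
*.robage 50 mod 0 =
*.avpleas *950 <
start
0 *951 sub 951 store 'pleasure dropped, reverse the nudge direction
stop

cond
*.robage 50 mod 0 =
start
*902 *951 add 902 store 'nudge the weight
*.avpleas 950 store 'remember this period's average
stop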
Title: Challenge #3: Neural Network
Post by: Moonfisher on April 07, 2008, 03:36:48 PM
Very nice
I just realized by looking at this that if I want to write a complex neural network bot then using a simple C program to train the fixed outputs of a league bot to get the weights will be the least of my problems. Gonna need a way to autogenerate the code for the network and insert the weights... I haven't looked at Sanger yet, hoping it can do it for me.
And code execution costs would be high if running F1 conditions...

But I'm not going to quit on this, even if Sanger doesn't help and I have to write a program to generate the bot myself.
I really think tweaking the values in a neural network would be a far more stable way of evolving bot behavior.
Ideally it should use all inputs and outputs (I realize there may not be enough free mem locations for that), and just start off with random fixed weights, then let the weights change by mutation and possibly sexrepro. Then you could always train weights from a bot's code in a separate program to create a base. You could also have different degrees of training for the same bot (I think you would only be able to see how well trained the network is, and not how good it is at generalizing (not sure that's an English word), but you could just have a fully trained network and some at different training steps).
I realize I can't have all inputs and outputs, I'd have to narrow it down to essentials...
And I know this isn't something I can just whip up overnight, I won't even have time to start on this any time soon...

But I really think this could help produce faster evo results, since all the different mutations break genes and produce redundant code, and generally have low chances of doing something useful, or doing anything at all. I know this isn't unnatural, and I love how mutated code can actually get better, but by mutating and swapping weights in a neural network the change will usually have some effect, and it can mean softer changes and overall tweaks in behavior. Mutations will still be harmful most of the time, but I'm hoping they'll be less frequent and less radical.
It just hit me that it wouldn't need to take up more mem locations than the number of hidden neurons, since the weights would be fixed and only changed by mutation; much more manageable... maybe I will go for all inputs and outputs then
(I really hope I can use Sanger for most of this, but I kinda doubt it)
Title: Challenge #3: Neural Network
Post by: Numsgil on April 07, 2008, 03:54:10 PM
Keep us posted.  I'd really love to see a successful neural net bot in the leagues.
Title: Challenge #3: Neural Network
Post by: Moonfisher on April 07, 2008, 05:04:24 PM
Heh, I'm already having problems determining how to convert certain inputs, like *.refxpos ...
And generally some inputs seem harder for a bot to understand. So I would probably have to use *.refxpos *.refypos angle as an input, and possibly have no input for the raw values. (I can divide the angle by 1220, and if it can exceed that I can always use mod; the point is I know its max value, and it's easier for a bot to put to use, I think.)
But it's a huge job, still not sure exactly how the values should be converted, but I'm probably going to go for something simple and linear, since I get confused trying to write any complex equation in DB...
And there are probably plenty of values I haven't thought of that will be hard or impossible to convert, but I'll try to get as much raw data in as possible.
Also wondering how decimal values work... can I save 50 100 div into a location and have 0.5 saved? If not, then can I do this: 50 100 div 100 mult and have 50 again (will decimals work in the stack even if not when saved)? Or would I risk ending up with 0 or 100?
Would make things easier for me if I could just use decimal values
As far as I can see all values should be converted to a value between -1 and 1, and I would probably start out with randoms in that range. Problem is I'm guessing mutations will change the weights more than intended, so I may have to scale everything to -100 to 100 anyway... still, it would be nice if it's at least possible to use decimal values in the stack.

But this is going to take ages, it will be a few months till I have time to put some serious effort into it, and then I need to find a way to convert as many inputs as possible, and I may also be underestimating what I'll need to do with the outputs, then the program to generate the code with all the weights, and a program to extract training values from a bot, and I'm probably completely overlooking something that will come back to haunt me.
Just hope I can get it done before DB3 comes out
Title: Challenge #3: Neural Network
Post by: Numsgil on April 07, 2008, 05:45:40 PM
The stack is integers, so if you want .5 you'll have to think of clever ways to do it using just integer arithmetic.
Title: Challenge #3: Neural Network
Post by: Moonfisher on April 08, 2008, 08:17:38 AM
Heh damn, that's not going to be easy... will definitely limit the range of values I can play with...
For instance, if I want to convert the angle it's easy with decimal values: *.myangle 1220 div (result 0-1)
If I could use very large values it wouldn't be a problem either: *.myangle 100 mult 122000 div (result 0-100)
Problem is that 122000 exceeds the cap... I could possibly use values from -10 to 10, although that would mess up accuracy, and it would still force me to cut anything that can exceed 3200...

I think I'll go for the simple solution with a loss of accuracy and a low cap... I know I could technically use the binary operators available... but this isn't really the part of the challenge I was planning on spending a lot of time on, so I'll probably go for values from -10 to 10 unless I can find a neat easy example on the wiki to make these conversions without hitting a cap or getting any decimal values.
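
Actually, one trick that might do it is to pre-reduce the fraction, so the multiply still happens first but with a smaller constant, and the intermediate stays under the cap:

'wrong: *.myangle 1220 div 100 mult - the integer division truncates to 0 first
'wrong: *.myangle 100 mult 1220 div - the intermediate (up to 122000) exceeds the cap
*.myangle 10 mult 122 div 'the angle scaled to roughly 0-100; the intermediate tops out around 12200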
Title: Challenge #3: Neural Network
Post by: Numsgil on April 08, 2008, 01:51:22 PM
I think the cap for integers on the stack is 32-bit math.  I can't remember if that ever made it into the VB version, or if it was just something I implemented in the C++ and C# versions.
Title: Challenge #3: Neural Network
Post by: Moonfisher on April 09, 2008, 03:17:09 AM
Ohh that would be sweet, that would move the cap up to 4294967296 (2^36)... or half of that I guess...
I was starting to hope I could maybe join up 2 memory locations and use << and >> to manage them as one, but I wasn't even sure if it would be possible to move data from one location to another using those operators (I could imagine they would stay separate locations).
But this would be MUCH better... then I just need to make sure nothing above 32000 is saved in the hidden layer
Gonna have to test this, that would help a lot... using values from -10 to 10 is just not accurate enough...

I'm also thinking of adding a lot of extra inputs, like a conspec input with a number to represent enemies, friends and algae, and stuff like that, to make it easier on the network...
And the anti-viral gene and possibly the robage 0 gene should probably be separate genes. But I'm slowly forming an idea about how to make this happen; a shame I have to waste so much time writing that damn thesis.
Title: Challenge #3: Neural Network
Post by: Numsgil on April 09, 2008, 05:02:22 AM
Give in to your dark impulses, apprentice.  
Title: Challenge #3: Neural Network
Post by: Moonfisher on April 09, 2008, 12:53:26 PM
I'm trying to resist, but the dark side is too tempting
Really afraid to get too caught up in this... it's OK to fiddle with it a little in my spare time... but currently it's closer to all my spare time, and I just can't find a way to make myself dedicate more of it to the thesis...
Also just noticed I wrote the value for 2^36 for some reason... it was of course supposed to be 2^32...
Anyway, going to make a short test bot to check if I can use large values in the stack, and if that works I'll make a micro NN to check if I can actually make all of this work, and if it does I'll probably try to run a sim with it to see how well it evolves... although I'm having doubts about how well a small hand-made network would evolve... might try to make it completely binary and use random values for the weights (provided I can set up the mutations to work properly with that method; haven't explored that properly yet).
I'll post results in here if I get anywhere with it...
I really hope this can work, I keep thinking of more ideas for it, like training 100 networks from the same bot that do exactly the same thing but started from different random weights from different seeds before getting trained, creating different evolutionary potential...
Title: Challenge #3: Neural Network
Post by: Moonfisher on April 09, 2008, 03:36:08 PM
Ok, the stack allows the use of values above 32000, which is going to make things a lot easier
I tried making a few hand-tailored networks:
They're just for testing and don't represent exactly how this will work, it was mostly to see if the scaling of weights worked... Inputs for these are binary... made it easier to set the weights manually...
I'm thinking of keeping as much as possible in the stack for as long as possible in between genes.
They run properly with thin liquids and no wrap... lots of veggies...
Also the outputs aren't transformed properly, and generally it's just a quick test to see if the general concept would work...

This one uses one hidden neuron.
'i1 : *.eye5
'i2 : *.refshoot *.myshoot !=

'o1 : .shoot
'o2 : .up

'Normally the inputs would have no definitions (for several reasons), this is just for testing
def i1 50
def i2 51
def h1 52

cond
*.robage 0 =
start
0 .shoot store
stop

cond
start
*.eye5 sgn .i1 store
stop

cond
*.refshoot *.myshoot !=
start
1 .i2 store
stop

cond
start
*.i1 100 mult -50 mult 100 div .h1 store
stop

cond
start
*.i2 100 mult -50 mult 100 div *.h1 add .h1 store
stop

cond
start
*.h1 100 mult 100 div 100 div .shoot store
stop

cond
start
*.h1 -300 mult 100 div 100 div .up store
stop

cond
start
0 .i2 store
stop

end

And this one uses 2... the second one does virtually nothing though, but the idea with this one is to set some random weights and see if something useful can evolve. It doesn't reproduce, so I'm just throwing in lots of bots with frequent point mutation to see what happens...
If you use the current values it's more likely to devolve, since the best mutations are just some increase in speed once it gets big and slow...

'i1 : *.eye5 sgn
'i2 : *.refshoot *.myshoot !=

'o1 : .shoot
'o2 : .up

'Normally the inputs would have no definitions (for several reasons), this is just for testing
def i1 50
def i2 51
def h1 52
def h2 53

cond
*.robage 0 =
start
0 .shoot store
stop

cond
start
'*.eye5 100 ceil 100 mult 100 div .i1 store
*.eye5 sgn .i1 store
stop

cond
*.refshoot *.myshoot !=
start
1 .i2 store
stop

'Inputs
'h1
cond
start
*.i1 100 mult -50 mult 100 div .h1 store
stop

cond
start
*.i2 100 mult -50 mult 100 div *.h1 add .h1 store
stop

'h2
cond
start
*.i1 100 mult 50 mult 100 div .h2 store
stop

cond
start
*.i2 100 mult 50 mult 100 div *.h2 add .h2 store
stop

'Outputs
'o1
cond
start
*.h1 100 mult *.h2 0 mult add 100 div 100 div .shoot store
stop

'o2
cond
start
*.h1 -300 mult *.h2 100 mult add 100 div 100 div .up store
stop

cond
start
0 .i2 store
stop

end

So far I've only run a small quick test. Funny enough, 2 of the bots broke the conspec, but one had broken the gene generating the input, and the other one had broken the input gene for h1, making it see an enemy all the time... as far as weight tweaking goes nothing very interesting had time to happen, and I noticed the actual weights were rarely the ones to get mutated. This is also why I'll try to keep a lot in the stack, and have as little code as possible that isn't related to the network and its weights... also split everything up into more genes, both for sexrepro and to increase the odds that a broken gene will just act as a dead dendrite or neuron...
If I manage to evolve something interesting from the one with random values I'll post it... but the odds are low since it doesn't reproduce and I'm generally impatient and don't have much faith in the potential of a bot that size... it's just too small and fragile, it needs a big network so evolution can really mess it up, and generally so most of the mutations would affect the actual network.
I'm thinking it might be a good idea to make the program able to clean up an evolved bot, restoring broken genes outside the network, and replacing broken dendrite genes with fresh ones with 0 as a weight. But I'm also thinking that may not be possible at all, since the mutations breaking certain genes can have too many different effects; I may have to fix it by hand, if that's even possible...
My concern is mostly that the changes should really be focused around the weights, and mutations to other genes and killed dendrites would usually have more radical effects and therefore become rather frequent... and dendrites and broken genes won't naturally form (or at least it's very unlikely)... so it may end up locking the bot in a certain direction... I think if something evolves to be worthwhile I could try to fix it up a little by hand... but best try not to have too much code outside the network...

Also I ran one test on the random bot; one bot was alive when I got back, it had broken the output gene for shoot, just replaced store with dec... it was shooting all the time and moved towards enemies... it was lucky to have had a few algae in sight and survived a little longer... but not quite what I was hoping for
Not sure why it was moving forward, I'm guessing the values in the stack from the first broken output gene affected the second output gene...
This does support the theory that keeping values in the stack and splitting everything into many genes may be beneficial... still trying to figure out if there's a really clever way to do it... possibly one allowing for new dendrites to form or merge (or at least increasing the odds).
I can see I definitely need to test more ideas before I start on the C code... also I think using C may be a stupid way of doing it... but I don't know Python or Perl and generally never really used those syntax-based string-manipulation things... but maybe learning one will be easier than trying to build it in C
I think I'll just try to find a C library with those functions... if it looks like they would be very useful...
Title: Challenge #3: Neural Network
Post by: Numsgil on April 09, 2008, 03:52:22 PM
Instead of mutating the whole bot, I would do a bit of random oscillation on the weights every time a bot is born.  Just check if its robage is 0 and if so, add or subtract a little bit from the weights at random.  That way you can preserve the structure (which is extremely complex) and just mutate the weights.
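
In DNA terms that could be as simple as one gene per weight, something like this sketch (960 is an arbitrary weight location and the +/-10 range is made up):

cond
*.robage 0 =
start
*960 20 rnd add 10 sub 960 store 'add a random offset in [-10, 10] at birth
stop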
Title: Challenge #3: Neural Network
Post by: abyaly on April 10, 2008, 02:17:15 AM
What is your thesis on?
Title: Challenge #3: Neural Network
Post by: Moonfisher on April 10, 2008, 03:10:02 AM
Yeah, handling mutations manually would be more stable, but it would also set a strict limit on the size of the network (since I can only inherit 15 weights).
So right now I'm toying with the idea of keeping as much as possible in the stack at all times, like 15 hidden nodes at a time, then emptying them into vars to make room for new values...
I'm hoping this would cause broken dendrites to often affect new neurons... but not sure if that would be better or worse
Anyway, if the structure completely falls apart I think it should be OK; if I want to restore some potential in an evolved network I'll just have to add a second network to handle whatever has gotten lost.
Anyway, it's unnatural for a neural network to have dendrites going from every input to every neuron and so forth... so if something breaks or merges it would only be natural...
Anyway, I made a version using some transformation for the input and output values, but I think I scaled everything wrong, lost track along the way
I also think maybe splitting it up may have a harmful effect, gotta find a balance between a good setup for sexrepro and the least harmful mutations that disable genes.
And as far as I can see a large part of the network will always be the part transforming input and output values, so those would often get mutated... but then again I think it would correspond to just another broken or merged dendrite... I'm mostly concerned that on average, for one mutation of a weight, 1 or 2 dendrites would get broken, so a bot needs to be rather lucky to achieve some weight tweaking without also having lost some dendrites. I'm thinking of maybe using the epigenetic locations to save some weights that have a more dramatic effect, to cause some more frequent tweaking of those values...


def h1 52
def h2 53

cond
*.robage 0 =
start
0 .shoot store
stop

'Inputs
'h1
start
*.eye5
stop

start
40 mult
stop

start
.h1 store
stop

cond
start
100
stop

cond
*.refshoot *.myshoot =
start
0 mult
stop

start
40 mult
stop

start
*.h1 add
stop

start
.h1 store
stop

'h2
start
*.eye5
stop

start
-10 mult
stop

start
.h2 store
stop

cond
start
100
stop

cond
*.refshoot *.myshoot =
start
0 mult
stop

start
100 mult
stop

start
*.h2 add
stop

start
.h2 store
stop

'------------------------ Outputs
'--- o1
start
*.h1 100 mult
stop

start
*.h2 0 mult
stop

start
add
stop

start
100 div
stop

start
100 div
stop

start
sgn -
stop

start
.shoot store
stop

'--- o2
start
*.h1 1 mult
stop

start
*.h2 1 mult
stop

start
add
stop

start
100 div
stop

start
*.maxvel mult
stop

start
100 div
stop

start
.up store
stop

end


Also, about the thesis: it was initially supposed to be an experiment in using neural networks in a game AI, but I realized I didn't have the inputs I needed, so now it's a system for mining and baking data for the AI  Will have to work on the AI afterwards
(Also I wasn't sure if it was really worth putting that much effort into it; if you spend that much time on a feature you want to be sure it's something people will notice and appreciate... Damn customers... the data mining feature can also be used for a lot of other stuff)

And while I'm typing anyway, I think I hit a strange bug with this thing. I was running veggies that gained like 40 nrg per kilo, so they got really big and gained a lot of energy all the time. It then seemed like once killed they didn't always disappear (like they never got removed from the bucket), and since the bot only moves forward it pretty much got stuck every time it was looking at a non-existent veggy. I'll make a proper bug report later today if I have the time.
Title: Challenge #3: Neural Network
Post by: Moonfisher on April 11, 2008, 04:16:41 PM
Ok... this is another hand-made neural network... it can aim, shoot, move forward and reproduce...
Tried to adjust the values manually, so it's pretty stiff for a neural network...
And it's still too small... and I really don't think it'll evolve well either...
Added a few normal genes at the end to get sexrepro in there, and some body regulation to help survival; it can manage without it, but it helps a little...

I'm really hoping to be able to push more or less raw values all the way through the network, like using the raw angle value to set the aim... but it might be better to keep everything binary, so either something happens or it doesn't... but that would basically mean a trained bot would still have all its original actions, just with its conditions trained into the network... this would really limit the possibilities.
I was hoping a larger network might evolve some adjustment to aim using the refvelup and refveldx inputs... stuff like that...
I guess I'd have to set it up to be possible to customize the transformation of inputs and outputs anyway... not sure what the best way to handle this is.
Also I figured I could have used mod to store decimal values, but it was a lot easier just to scale everything...

Not sure why I split up some of it and not the rest... I know it looks weird... got confused...

'Another neural network test
'Just trying to figure out whats possible

'i1 : *.eye5
'i2 : *.refshoot *.myshoot !=
'i3 : *.refxpos *.refypos angle
'i4 : *.body
'-------
'o1 : .shoot
'o2 : .up
'o3 : .setaim
'o4 : .repro

def maxangle 1364
def maxeye 100
def maxbody 2000
def bias 1

def wi1h1 10
def wi2h1 200
def wi3h1 0
def wi4h1 0

def wi1h2 -10
def wi2h2 400
def wi3h2 0
def wi4h2 0

def wi1h3 0
def wi2h3 150
def wi3h3 0
def wi4h3 0

def wi1h4 0
def wi2h4 0
def wi3h4 500
def wi4h4 0

def wi1h5 0
def wi2h5 0
def wi3h5 0
def wi4h5 500

def wh1o1 500
def wh1o2 -200
def wh1o3 0
def wh1o4 0

def wh2o1 0
def wh2o2 50
def wh2o3 0
def wh2o4 0

def wh3o1 0
def wh3o2 300
def wh3o3 -100
def wh3o4 0

def wh4o1 0
def wh4o2 0
def wh4o3 500
def wh4o4 0

def wh5o1 0
def wh5o2 0
def wh5o3 0
def wh5o4 500


def h1 51
def h2 52
def h3 53
def h4 54
def h5 55



cond
*.robage 0 =
start
0 .shoot store
stop

start
.bias .h1 store
.bias .h2 store
.bias .h3 store
.bias .h4 store
.bias .h5 store
stop



'********** Inputs
'======= h1
'--- i1
start
 *.eye5
stop

start
 200 mult .maxeye div 100 sub
stop

start
 .wi1h1 mult
stop

start
 100 div
stop

start
 *.h1 add .h1 store
stop


'--- i2
start
 *.refshoot *.myshoot sub abs sgn 200 mult 100 sub
stop

start
 .wi2h1 mult
stop

start
 100 div
stop

start
 *.h1 add .h1 store
stop


'--- i3
start
 *.refxpos *.refypos angle .maxangle mod
stop

start
 200 mult .maxangle div 100 sub
stop

start
 .wi3h1 mult
stop

start
 100 div
stop

start
 *.h1 add .h1 store
stop


'--- i4
start
 *.body
stop

start
 200 mult .maxbody div 100 sub
stop

start
 .wi4h1 mult
stop

start
 100 div
stop

start
 *.h1 add .h1 store
stop

start
 *.h1 5 div .h1 store
stop



'======= h2
'--- i1
start
 *.eye5

 200 mult .maxeye div 100 sub

 .wi1h2 mult

 100 div

 *.h2 add .h2 store
stop


'--- i2
start
 *.refshoot *.myshoot sub abs sgn 200 mult 100 sub

 .wi2h2 mult

 100 div

 *.h2 add .h2 store
stop
 

'--- i3
start
 *.refxpos *.refypos angle .maxangle mod

 200 mult .maxangle div 100 sub

 .wi3h2 mult

 100 div

 *.h2 add .h2 store
stop


'--- i4
start
 *.body

 200 mult .maxbody div 100 sub

 .wi4h2 mult

 100 div

 *.h2 add .h2 store
stop

start
 *.h2 5 div .h2 store
stop



'======= h3
'--- i1
start
 *.eye5

 200 mult .maxeye div 100 sub

 .wi1h3 mult

 100 div

 *.h3 add .h3 store
stop


'--- i2
start
 *.refshoot *.myshoot sub abs sgn 200 mult 100 sub

 .wi2h3 mult

 100 div

 *.h3 add .h3 store
stop


'--- i3
start
 *.refxpos *.refypos angle .maxangle mod

 200 mult .maxangle div 100 sub

 .wi3h3 mult

 100 div

 *.h3 add .h3 store
stop


'--- i4
start
 *.body

 200 mult .maxbody div 100 sub

 .wi4h3 mult

 100 div

 *.h3 add .h3 store
stop

start
 *.h3 5 div .h3 store
stop


'======= h4
'--- i1
start
 *.eye5

 200 mult .maxeye div 100 sub

 .wi1h4 mult

 100 div

 *.h4 add .h4 store
stop


'--- i2
start
 *.refshoot *.myshoot sub abs sgn 200 mult 100 sub

 .wi2h4 mult

 100 div

 *.h4 add .h4 store
stop


'--- i3
start
 *.refxpos *.refypos angle .maxangle mod

 200 mult .maxangle div 100 sub

 .wi3h4 mult

 100 div

 *.h4 add .h4 store
stop


'--- i4
start
 *.body

 200 mult .maxbody div 100 sub

 .wi4h4 mult

 100 div

 *.h4 add .h4 store
stop

start
 *.h4 5 div .h4 store
stop


'======= h5
'--- i1
start
 *.eye5

 200 mult .maxeye div 100 sub

 .wi1h5 mult

 100 div

 *.h5 add .h5 store
stop


'--- i2
start
 *.refshoot *.myshoot sub abs sgn 200 mult 100 sub

 .wi2h5 mult

 100 div

 *.h5 add .h5 store
stop


'--- i3
start
 *.refxpos *.refypos angle .maxangle mod

 200 mult .maxangle div 100 sub

 .wi3h5 mult

 100 div

 *.h5 add .h5 store
stop


'--- i4
start
 *.body

 200 mult .maxbody div 100 sub

 .wi4h5 mult

 100 div

 *.h5 add .h5 store
stop

start
 *.h5 5 div .h5 store
stop


'in
'-----------------------------------------------------------------
'out


'********** Outputs
'======= o1
start
 *.h1

 .wh1o1 mult

 *.h2

 .wh2o1 mult

 add

 *.h3

 .wh3o1 mult

 add

 *.h4

 .wh4o1 mult

 add

 *.h5

 .wh5o1 mult

 add

 100 div

 5 div

 sgn 1 mult - 0 ceil

.shoot store
stop


'======= o2
start
 *.h1
stop

start
 .wh1o2 mult
stop

start
 *.h2
stop

start
 .wh2o2 mult
stop

start
 add
stop

start
 *.h3
stop

start
 .wh3o2 mult
stop

start
 add
stop

start
 *.h4
stop

start
 .wh4o2 mult
stop

start
 add
stop

start
 *.h5
stop

start
 .wh5o2 mult
stop

start
 add
stop

start
 100 div
stop

start
 5 div
stop

start
 *.maxvel mult 100 div
stop

start
 .up store
stop



'======= o3
start
 *.h1
stop

start
 .wh1o3 mult
stop

start
 *.h2
stop

start
 .wh2o3 mult
stop

start
 add
stop

start
 *.h3
stop

start
 .wh3o3 mult
stop

start
 add
stop

start
 *.h4
stop

start
 .wh4o3 mult
stop

start
 add
stop

start
 *.h5
stop

start
 .wh5o3 mult
stop

start
 add
stop

start
 100 div
stop

start
 5 div
stop

start
 100 add 2 div .maxangle mult 100 div
stop

start
 .setaim store
stop




'======= o4
start
 *.h1
stop

start
 .wh1o4 mult
stop

start
 *.h2
stop

start
 .wh2o4 mult
stop

start
 add
stop

start
 *.h3
stop

start
 .wh3o4 mult
stop

start
 add
stop

start
 *.h4
stop

start
 .wh4o4 mult
stop

start
 add
stop

start
 *.h5
stop

start
 .wh5o4 mult
stop

start
 add
stop

start
 100 div
stop

start
 5 div
stop

start
 sgn 50 mult 0 floor
stop

start
 .repro store
stop


'----- some regular genes; it does fine without them, but they help a little. (Not going to increase this network...)

cond
*.nrg *.body 3 mult >
start
100 .strbody store
stop

cond
*.fertilized 5 >
*.nrg 1000 >
*.body 200 >
*.robage 50 >
start
40 .sexrepro store
stop

cond
*.robage 300 >
*.nrg 1000 >
*.body 500 >
*.kills 5 >
*.eye5 0 >
*.refshoot *.myshoot =
*.robage 350 mod 300 >
start
-8 .shoot store
*.refxpos *.refypos angle .setaim store
*.refvelup 5 add .up store
stop

end

Title: Challenge #3: Neural Network
Post by: Moonfisher on April 12, 2008, 07:44:31 AM
The NN posted above isn't that good; I did some tweaking and stuff to try and devolve it in a good way...
But with the transformation of inputs and outputs the whole structure gets more complex...

I've been trying to figure out how to keep everything in the stack all the time, and the only ways I can think of involve calculating the hidden nodes several times (all nodes for each output), and the point here is to get faster evo sims, and the network is already slowing things down by being big...
So I definitely need to use vars, but I'm thinking of skipping the first hidden layer, saving the value for the output into a var and then transferring it into the output at the very end.
This way I would only need one var for every output, and if I cut out transformation and just use raw values I could actually limit the whole thing to this method (i1 and i2 would just be raw inputs):
*.i1 .wi1h1 mult *.i2 .wi2h1 mult add *.o1 add .o1 store
and so forth, pushing the bias directly into the output vars.
This way it's more likely that a mutation would change something relevant...
It would still need to be scaled up and down, but I figure this could be done by having really large weights and then scaling everything down before it's saved in the output var (I think I remember seeing a ^ operator, so it should be possible to scale everything down without adding too much).
My concern is that raw values multiplied by a weight and added together are likely to exceed the cap of 32000 even when scaled down... so I'm not sure what kind of effect that would have. I was thinking of maybe capping the values, so exceeding the cap would keep the max value instead of messing up the input (see the sketch below)...
But then I'd be adding more stuff again... and who knows... maybe a network could actually exploit it in a good way...
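
To make the capping idea concrete, ceil and floor could saturate the sum before it's stored; a sketch using the same line as above (31000 is an arbitrary safety margin, assuming ceil/floor clamp the way they're used elsewhere in this thread):

*.i1 .wi1h1 mult *.i2 .wi2h1 mult add *.o1 add 31000 ceil -31000 floor .o1 store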

My biggest problem so far with raw values is that -1 in shoot is an attack, positive values are information... -6 is also an attack, -3 is poison, also an attack... but -2... shoots energy... which is fine with the normal setup, but with weights regulating the outputs... I'm afraid bots wouldn't be able to combine several shot types this way without constantly firing energy shots... so I may have to convert certain outputs to be binary: we either shoot -1 or -6 or -3 or -4 (see the gates sketched below)... not as dynamic... but probably a lot more stable.
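
A rough sketch of those binary gates (assuming .o1 and .o2 are output vars like above; if several gates fire in one cycle, the last store simply wins):

cond
*.o1 0 >
start
-1 .shoot store 'one shot type
stop

cond
*.o2 0 >
start
-6 .shoot store 'another shot type
stop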

I'm still not happy though. If I put a big network together it may have better odds of achieving something useful with fewer mutations, and possibly be able to evolve from a better base with more stable evolution... but even if this is true, the size of the code would reduce its speed, so even if it really does work better than random mutations, it's not even certain it'll be any faster... the advantage is the way nothing is set in stone in the network, every action is triggered by a mix of inputs, so theoretically it should be a better evo base... the downside is the size of the network grows exponentially for every input, output and hidden node you add, and to have a proper NN evo base you need a large network...
Basically I'm saying I'm falling out of love with this idea lately... unless I can see some progress from the test network at some point, I may drop this idea, at least for a while...
Title: Challenge #3: Neural Network
Post by: Moonfisher on April 15, 2008, 04:14:52 PM
I tried running a sim with the last network I posted (NNOne.txt)
Only with point mutations, but at a higher probability...
Was running size 2 with thin liquids and normal F1 settings (F1 costs, toroidal borders, no bodies and so forth), except the veggy cap was 500 and it repopulated at 60 by 70 veggies...
After about 150 mutations it started to form "fans" and sweep up the veggies. (Probably due to the abundance of food)
It seems like it only aims in one direction, forms fans and shoots...
It's not really doing much with the network though.
I guess the hand-made weights are also making it too stiff to really evolve properly.
Although, before "devolving" it maintained an average of 50-150 bots, and in the end it was averaging above 1000, so it's not like it didn't get better, it just didn't put the network to much use.

I'm still considering making it binary, using mostly raw inputs and custom output actions, then training all the conditions of a handmade bot into the weights.
This way mutations could slowly break down and tweak conditions without doing too much damage.
But it still limits the possibilities, so I'd really like to have more raw outputs...

Since the network didn't entirely flop, and actually managed to get better in a shorter amount of time, I made a new network with no transformation, using only raw inputs...
Made some hand-made values again to get it started, so it's still stiff, but I only set weights for the first 4 inputs and outputs and then expanded it with a lot of empty network... it also has a different bias for every neuron, since evolution seems to have that effect anyway.
I'm hoping to see some kind of proper use of the network... any kind at all... if I can get it with a network with handmade weights, a properly trained network should have even better chances.
I ran the new network for a short while, and it seemed to trigger shell rather fast, but I'm pretty sure it just got an inc somewhere or something like that...

I added some bots and a sim, the sim is the last save from the last bot (NNOne.txt).
There's also the base bot used and 2 of the last evolved bots.
And also the new networks, the first one using only 4 inputs and outputs and the second one identical but with a larger empty network.
Title: Challenge #3: Neural Network
Post by: d-EVO on November 21, 2008, 06:32:03 PM
I'm gonna give this a shot.

Just grappling with this neural network concept, so I made this very simple bot, just to start.

Can it be classified as a neural network?

And if eye5 and refeye are the only inputs you can use, why are people entering bots that use other eyes?

Title: Challenge #3: Neural Network
Post by: Moonfisher on November 22, 2008, 06:47:14 AM
Well, I guess you could call it a hand-authored binary neural network, in the sense that it stores values in hidden neurons.
And I would say using condition logic as input is OK, just as using your output in a condition doesn't violate any "rule"...
But I think using condition logic in the middle of the network kind of makes the network redundant... then you could just have put the input straight into the final conditions without storing anything anywhere.

Usually any input would be sent to all the hidden neurons in the first layer (this is however not a rule; it just leaves more options for the network to adjust when using backpropagation or random mutations).
You would also normally multiply the input by a value (a weight) which would be specific to every connection. So every time a value travels through a dendrite it will be modified according to the "thickness" of the dendrite...
Then you add up all the inputs for a neuron and move on to the next one

So basically what I'm saying is that you could consider the structure of your bot as a neural network, but you're not really getting any of the advantages of a neural network from this.
Normally you would use the weights to turn the input negative for some neurons, in order to make sure the right ones fire at the right time (see the small example below).
This is however pretty hard to adjust manually... I was toying with the idea of making a small script that would generate most possible inputs and outputs from a regular bot, and then use those values to train a network using BP... but atm my spare time is going to the DB mod for forming network structures instead of random mutations... I think that project is more interesting in the long run.
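
To make that concrete, here's a minimal two-neuron example in DNA terms: the same two inputs feed both neurons, but one weight is flipped negative so they fire in opposite situations (the weights are made up, and h1/h2 would be def'd to free locations):

cond
start
*.eye5 60 mult *.refeye -40 mult add 100 div .h1 store
*.eye5 -60 mult *.refeye 40 mult add 100 div .h2 store
stop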

And you can use any eye you want, but all the refvars (like refeye) will be coming from your focus eye, and by default your focus eye is eye 5, but you can change this.
You can also just use eyef, which will always be the eye you're using to focus with. The other eyes will still show if they see something, but you need to change focuseye, or turn towards them, to know if it's a friend or foe... and you can't change focuseye mid-cycle; if you switch focuseye it will have an effect next cycle. This is also why some fancy eye systems end up letting you turn back and forth between 2 friends... and that's also the reason why Spinner offsets all its eyes to the left and uses eye9 as the focus eye, so it's always turning in the same direction and doesn't get stuck turning back and forth. Turning is free in F1, so spinning around yourself has no cost and it's an effective way to locate any incoming enemies fast.

The sysvars eye1-eye9 will just show if they see anything and how far away it is, while eyef shows the same thing but instead of being a specific eye it's the value from the eye you've selected as your focus eye. So if you didn't change your focus eye then eye5 and eyef will be the same, but if you change your focus eye a lot, then eyef can make it easier for you to write the code you need.
If you see something with your focus eye then all the refvars will be filled out with useful information, like refeye and refshoot, which tell you how many eye and shoot commands the target has in its DNA, so you can compare with your own DNA and see if it's one of your own kind. You can also see which way the target is moving and how fast, and use those values to match the opponent's movements. You also get the opponent's coordinates, making it easy to find the angle and set your aim to point exactly at the opponent (you get the coordinates for the next cycle, which means you don't need to worry about where the guy is moving to). It's very easy to track an opponent:
*.refvelup .up store 'Match the opponent's forward movement
*.refvelsx .sx store 'Match the opponent's sideways movement
*.refxpos *.refypos angle .setaim store 'Aim at the next location where your opponent will be

With this code the only way for the target to get away is by crossing a toroidal border or if something blocks your vision, or maybe if something rams into you and pushes you away.
You may also want to have a higher .up value than your opponent or you'll never get any closer.
So maybe something like this (with the max-speed store wrapped in its own gene, so it only fires when the target is close in eyef):
*.refvelup .up store
*.refvelsx .sx store
*.refxpos *.refypos angle .setaim store

cond
*.eyef 30 >
start
*.maxvel .up store
stop

This is all you really need to track an opponent. It's my impression that it wasn't always this easy, but now it is, so no reason to make it complicated.

Also, if you don't want to get sidetracked with strange eye systems, just use a wider eye5. Something like 300 .eye5width store will make your eye5 about as wide as all your regular eyes together (but the range will also be shorter).
Title: Challenge #3: Neural Network
Post by: d-EVO on November 22, 2008, 07:21:03 PM
Thanks for the post, Moonfisher.

Knew most of this stuff already.
To check out the extent of my knowledge, look at my latest bot (BETA-T).
If it sees a friend who is tracking an enemy, he will aim in the exact direction of that enemy. A great teamwork bot.
And an UNFOOLABLE recognition system!!! Please tell me what you think.

Thanks for the neural network info.
The reason my bot was so simple was because I didn't know how much you could possibly do with just refeye and eye5.
Title: Challenge #3: Neural Network
Post by: Moonfisher on November 23, 2008, 01:07:47 PM
Ah OK, so the NN challenge only allowed eye5 and refeye... didn't notice that; I thought you were in doubt about what they did, my bad.

And the conspec you're using is more or less the same as what bacillus proposed in the conspec challenge topic.
Basically the only way I know of to fool in/out conspecs is parroting what the other person is saying, so if you require one input to be different from what you're saying then you prevent this. In the other input you could have a set key or a relation like the one you're using... both should work.
The only downside is spending 2 stores a cycle on sending the values.
You can also use .dnalen in .memloc and compare *.memval to *.dnalen, but then friends affected by viruses will turn hostile.
Spinner actually uses only one output, with totalmyspecies and some different keys...
But looking at what you're doing, you could actually put it all into one output and it would be a lot simpler than what Spinner is doing:

(Warning, untested code, think it works though)
The output would be :
100 rnd dup 10 add 128 mult | .out10 store

And to check for friends :
*.in10 *.out10 !=
*.in10 127 & *.in10 16256 & 128 div 10 sub = and

Basically, 2 values between 1 and 100 will fit in the space of 32000 (the packed value tops out around 110*128 + 100 = 14180)... so they can be crammed into one output, saving a little energy...


However, having a foolproof conspec won't prevent you from looking back and forth between 2 friends... this has nothing to do with the conspec.
You see something in eye8, so you turn and find out it's a friend, but now you see something in eye9 so you turn and see another friend... but now you see the last guy you saw in eye1, so you turn back to find out what it is and find the first friend again, then you see friend no. 2 in eye9 again, and so on...
Of course if you have a lot of movement this shouldn't be a big problem, but it is definitely a flaw you can find in most eye systems... if someone uses them in a slow-moving bot, or one that is static half the time, it will spend a lot of time looking back and forth in endless loops. Hence all the blah blah blah about Spinner and its eyes...
Title: Challenge #3: Neural Network
Post by: d-EVO on November 23, 2008, 04:05:27 PM
I actually already knew exactly what eye5 and refeye did (I have basically memorised all the sysvars on the wiki).

My bot doesn't use that kind of aiming system, so it never has that problem.
My system can't be fooled by mimicry; it checks to see if the refbot has the same out as my bot. If it does, it knows it is not friendly, because there is only a 1 in 100 chance that a friend can have the same outputs.

The only thing I wasn't completely sure of was how to implement a neural network into a bot.
The only thing I wasn't completly sure of was how to implement a neural network into a bot.