Author Topic: Challenge #3: Neural Network  (Read 20510 times)

Offline Numsgil

  • Administrator
  • Bot God
  • *****
  • Posts: 7742
    • View Profile
Challenge #3: Neural Network
« Reply #15 on: April 07, 2008, 05:45:40 PM »
The stack is integers, so if you want .5 you'll have to think of clever ways to do it using just integer arithmetic.
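As a conceptual sketch of the usual workaround (Python here, not DNA, and the names are just illustrative): pick a scale factor, store every fraction pre-multiplied by it, and divide the scale back out after each multiply.

```python
SCALE = 100  # fixed-point scale: the fraction 0.5 is stored as the integer 50

def fp_mul(a, b):
    """Multiply two fixed-point values (each pre-scaled by SCALE)."""
    return (a * b) // SCALE

half = 50            # 0.5 in fixed point
seven = 7 * SCALE    # 7.0 in fixed point

result = fp_mul(seven, half)  # 3.5 in fixed point, stored as 350
```

In DNA terms this is the same trick as `*.x 50 mult 100 div`: multiply by the scaled weight, then divide the scale back out.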

Offline Moonfisher

  • Bot Overlord
  • ****
  • Posts: 592
    • View Profile
Challenge #3: Neural Network
« Reply #16 on: April 08, 2008, 08:17:38 AM »
Heh, damn, that's not going to be easy... it will definitely limit the range of values I can play with...
For instance, if I want to convert the angle, it's easy with decimal values: *.myangle 1220 div (result 0-1).
If I could use very large values it wouldn't be a problem either: *.myangle 100 mult 122000 div (result 0-100).
The problem is that 122000 exceeds the cap... I could possibly use values from -10 to 10, although that would hurt accuracy, and it would still force me to cut anything that can exceed 3200...

I think I'll go for the simple solution with a loss of accuracy and a low cap... I know I could technically use the binary operators available... but this isn't really the part of the challenge I was planning on spending a lot of time on, so I'll probably go for values from -10 to 10 unless I can find a neat, easy example on the wiki for making these conversions without hitting a cap or getting any decimal values.
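For what it's worth, there is a standard integer trick that avoids the overflow without losing accuracy: split the value with div and mod so the full product is never formed. A Python sketch of the identity (illustrative only; DNA has div and mod, but I haven't verified this exact sequence there):

```python
def scale_no_overflow(x, k, d):
    """Compute x * k // d without ever forming the full product x * k.

    x is split as d * (x // d) + (x % d); only (x % d) * k is formed,
    which stays below d * k no matter how large x is.
    """
    return (x // d) * k + ((x % d) * k) // d

# Same result as the naive formula that would blow past the cap:
angle = 1000
assert scale_no_overflow(angle, 100, 1220) == (angle * 100) // 1220
```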

Offline Numsgil

  • Administrator
  • Bot God
  • *****
  • Posts: 7742
    • View Profile
Challenge #3: Neural Network
« Reply #17 on: April 08, 2008, 01:51:22 PM »
I think the cap for integers on the stack is 32-bit math.  I can't remember if that ever made it into the VB version, or if it was just something I implemented in the C++ and C# versions.

Offline Moonfisher

  • Bot Overlord
  • ****
  • Posts: 592
    • View Profile
Challenge #3: Neural Network
« Reply #18 on: April 09, 2008, 03:17:09 AM »
Ooh, that would be sweet; that would move the cap up to 4294967296 (2^36)... or half of that, I guess...
I was starting to hope I could maybe join up 2 memory locations and use << and >> to manage them as one, but I wasn't even sure if it would be possible to move data from one location to another using those operators (I could imagine they would be separate locations).
But this would be MUCH better... then I just need to make sure nothing above 32000 is saved in the hidden layer.
Gonna have to test this; that would help a lot... using values from -10 to 10 is just not accurate enough...

I'm also thinking of adding a lot of extra inputs, like a conspec input with a number to represent enemies, friends, algae, and stuff like that, to make it easier on the network...
And the anti-viral gene and possibly the robage 0 gene should probably be separate genes. But I'm slowly forming an idea about how to make this happen; a shame I have to waste so much time writing that damn thesis.
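If the stack really is 32-bit, the << and >> idea would look something like this in principle (Python sketch; whether DNA's shift operators and memory locations actually allow this is exactly what would need testing):

```python
def pack(hi, lo):
    """Pack two 16-bit unsigned values into one 32-bit integer."""
    assert 0 <= hi < 1 << 16 and 0 <= lo < 1 << 16
    return (hi << 16) | lo

def unpack(packed):
    """Recover the two 16-bit halves with a shift and a mask."""
    return packed >> 16, packed & 0xFFFF

# Round trip: both values survive sharing one location.
assert unpack(pack(31999, 123)) == (31999, 123)
```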

Offline Numsgil

  • Administrator
  • Bot God
  • *****
  • Posts: 7742
    • View Profile
Challenge #3: Neural Network
« Reply #19 on: April 09, 2008, 05:02:22 AM »
Give in to your dark impulses, apprentice.  

Offline Moonfisher

  • Bot Overlord
  • ****
  • Posts: 592
    • View Profile
Challenge #3: Neural Network
« Reply #20 on: April 09, 2008, 12:53:26 PM »
I'm trying to resist, but the dark side is too tempting.
Really afraid to get too caught up in this... it's OK to fiddle with it a little in my spare time... but currently it's closer to all my spare time, and I just can't find a way to make myself dedicate more of it to the thesis...
Also, I just noticed I wrote the value for 2^36 for some reason... it was of course supposed to be 2^32...
Anyway, I'm going to make a short test bot to check if I can use large values on the stack, and if that works I'll make a micro NN to check if I can actually make all of this work. If it does, I'll probably try to run a sim with it to see how well it evolves... although I have doubts about how well a small hand-made network would evolve... I might try to make it completely binary and use random values for the weights (provided I can set up the mutations to work properly with that method; I haven't explored that properly yet).
I'll post results in here if I get anywhere with it...
I really hope this can work. I keep thinking of more ideas for it, like training 100 networks from the same bot that do exactly the same thing but started with different random weights from different seeds before getting trained, creating different evolutionary potential...

Offline Moonfisher

  • Bot Overlord
  • ****
  • Posts: 592
    • View Profile
Challenge #3: Neural Network
« Reply #21 on: April 09, 2008, 03:36:08 PM »
OK, the stack allows the use of values above 32000, which is going to make things a lot easier.
I tried making a few hand-tailored networks:
They're just for testing and don't represent exactly how this will work; it was mostly to see if the scaling of weights worked... The inputs for these are binary... that made it easier to set the weights manually...
I'm thinking of keeping as much as possible in the stack for as long as possible between genes.
They run properly with thin liquids and no wrap... lots of veggies...
Also, the outputs aren't transformed properly, and generally it's just a quick test to see if the general concept would work...

This one uses one hidden neuron.
'i1 : *.eye5
'i2 : *.refshoot *.myshoot !=

'o1 : .shoot
'o2 : .up

'Normaly the inputs would have no definitions (For several reasons), this is just for testing
def i1 50
def i2 51
def h1 52

cond
*.robage 0 =
start
0 .shoot store
stop

cond
start
*.eye5 sgn .i1 store
stop

cond
*.refshoot *.myshoot !=
start
1 .i2 store
stop

cond
start
*.i1 100 mult -50 mult 100 div .h1 store
stop

cond
start
*.i2 100 mult -50 mult 100 div *.h1 add .h1 store
stop

cond
start
*.h1 100 mult 100 div 100 div .shoot store
stop

cond
start
*.h1 -300 mult 100 div 100 div .up store
stop

cond
start
0 .i2 store
stop

end
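For reference, here is what those genes compute, transcribed into Python (a sketch only; I'm assuming DNA's div truncates toward zero, so plain truncating division is used instead of Python's floor division):

```python
def trunc_div(a, b):
    """Integer division truncating toward zero (assumed to match DNA's div)."""
    return int(a / b)

def one_hidden_net(eye5, enemy_seen):
    i1 = 1 if eye5 > 0 else 0       # *.eye5 sgn
    i2 = 1 if enemy_seen else 0     # *.refshoot *.myshoot !=

    # h1 accumulates both inputs, each weighted by -50 (stored as -50/100)
    h1 = trunc_div(i1 * 100 * -50, 100)
    h1 += trunc_div(i2 * 100 * -50, 100)

    shoot = trunc_div(trunc_div(h1 * 100, 100), 100)   # output weight 100
    up = trunc_div(trunc_div(h1 * -300, 100), 100)     # output weight -300
    return shoot, up

# Something harmless in sight: move toward it, don't shoot.
assert one_hidden_net(40, False) == (0, 1)
# Enemy in sight: h1 reaches -100, so the bot fires (-1) and accelerates.
assert one_hidden_net(40, True) == (-1, 3)
```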

And this one uses 2... the second one does virtually nothing though, but the idea with this one is to set some random weights and see if something useful can evolve. It doesn't reproduce, so I'm just throwing in lots of bots with frequent point mutation to see what happens...
If you use the current values it's more likely to devolve, since the best mutations are just some increase in speed once it gets big and slow...

'i1 : *.eye5 sgn
'i2 : *.refshoot *.myshoot !=

'o1 : .shoot
'o2 : .up

'Normaly the inputs would have no definitions (For several reasons), this is just for testing
def i1 50
def i2 51
def h1 52
def h2 53

cond
*.robage 0 =
start
0 .shoot store
stop

cond
start
'*.eye5 100 ceil 100 mult 100 div .i1 store
*.eye5 sgn .i1 store
stop

cond
*.refshoot *.myshoot !=
start
1 .i2 store
stop

'Inputs
'h1
cond
start
*.i1 100 mult -50 mult 100 div .h1 store
stop

cond
start
*.i2 100 mult -50 mult 100 div *.h1 add .h1 store
stop

'h2
cond
start
*.i1 100 mult 50 mult 100 div .h2 store
stop

cond
start
*.i2 100 mult 50 mult 100 div *.h2 add .h2 store
stop

'Outputs
'o1
cond
start
*.h1 100 mult *.h2 0 mult add 100 div 100 div .shoot store
stop

'o2
cond
start
*.h1 -300 mult *.h2 100 mult add 100 div 100 div .up store
stop

cond
start
0 .i2 store
stop

end

So far I've only run a small quick test. Funnily enough, 2 of the bots broke the conspec: one had broken the gene generating the input, and the other had broken the input gene for h1, making it see an enemy all the time... as far as weight tweaking goes, nothing very interesting had time to happen, and I noticed the actual weights were rarely the ones to get mutated. This is also why I'll try to keep a lot in the stack, and have as little code as possible that isn't related to the network and its weights... also split everything up into more genes, both for sexrepro and to increase the odds that a broken gene will just act as a dead dendrite or neuron...
If I manage to evolve something interesting from the one with random values I'll post it... but the odds are low, since it doesn't reproduce and I'm generally impatient and don't have much faith in the potential of a bot that size... it's just too small and fragile. It needs a big network so mutations can really mess with it, and generally so most of the mutations would affect the actual network.
I'm thinking it might be a good idea to make the program able to clean up an evolved bot, restoring broken genes outside the network and replacing broken dendrite genes with a fresh one with 0 as a weight. But I'm also thinking that may not be possible at all, since the mutations breaking certain genes can have too many different effects; I may have to fix it by hand, if that's even possible...
My concern is mostly that the changes should really be focused around the weights, while mutations to other genes and killed dendrites would usually have more radical effects and therefore become rather frequent... and dendrites and broken genes won't naturally re-form (or at least it's very unlikely)... so it may end up locking the bot in a certain direction... I think if something evolves to be worthwhile I could try to fix it up a little by hand... but best to try not to have too much code outside the network...

Also, I ran one test on the random bot; one bot was alive when I got back. It had broken the output gene for shoot, just replaced store with dec... it was shooting all the time and moved towards enemies... it was lucky to have had a few algae in sight and survived a little longer... but not quite what I was hoping for.
Not sure why it was moving forward; I'm guessing the values left on the stack by the first broken output gene affected the second output gene...
This does support the theory that keeping values in the stack and splitting everything into many genes may be beneficial... still trying to figure out if there's a really clever way to do it... possibly one allowing new dendrites to form or merge (or at least increasing the odds).
I can see I definitely need to test more ideas before I start on the C code... also, I think using C may be a stupid way of doing it... but I don't know Python or Perl and generally never really used those syntax-based string-manipulating function things... but maybe learning one of them will be easier than trying to build it in C.
I think I'll just try to find a C library with those functions... if it looks like they would be very useful...

Offline Numsgil

  • Administrator
  • Bot God
  • *****
  • Posts: 7742
    • View Profile
Challenge #3: Neural Network
« Reply #22 on: April 09, 2008, 03:52:22 PM »
Instead of mutating the whole bot, I would do a bit of random oscillation on the weights every time a bot is born.  Just check if its robage is 0 and, if so, add or subtract a little bit from the weights at random.  That way you can preserve the structure (which is extremely complex) and just mutate the weights.
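A minimal sketch of that scheme (Python, purely conceptual; in the bot it would be a robage-0 gene nudging each inherited weight location):

```python
import random

def jitter_weights(weights, max_step=5):
    """At birth, add or subtract a small random amount from every weight,
    leaving the network structure itself untouched."""
    return [w + random.randint(-max_step, max_step) for w in weights]

parent = [100, -50, 0, 300]
child = jitter_weights(parent)
# Structure (weight count) is preserved; each weight moves by at most 5.
assert len(child) == len(parent)
assert all(abs(c - p) <= 5 for c, p in zip(child, parent))
```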

Offline abyaly

  • Bot Destroyer
  • ***
  • Posts: 363
    • View Profile
Challenge #3: Neural Network
« Reply #23 on: April 10, 2008, 02:17:15 AM »
What is your thesis on?
Lancre operated on the feudal system, which was to say, everyone feuded all
the time and handed on the fight to their descendants.
        -- (Terry Pratchett, Carpe Jugulum)

Offline Moonfisher

  • Bot Overlord
  • ****
  • Posts: 592
    • View Profile
Challenge #3: Neural Network
« Reply #24 on: April 10, 2008, 03:10:02 AM »
Yeah, handling mutations manually would be more stable, but it would also set a strict limit on the size of the network (since I can only inherit 15 weights).
So right now I'm toying with the idea of keeping as much as possible in the stack at all times, like 15 hidden nodes at a time, then emptying them into vars to make room for new values...
I'm hoping this would cause broken dendrites to often affect new neurons... but I'm not sure if that would be better or worse.
Anyway, if the structure completely falls apart I think it should be OK; if I want to restore some potential in an evolved network, I'll just have to add a second network to handle whatever has gotten lost.
Anyway, it's unnatural for a neural network to have dendrites going from every input to every neuron and so forth... so if something breaks or merges it would only be natural...
Anyway, I made a version using some transformation of the input and output values, but I think I scaled everything wrong; I lost track along the way.
I also think splitting it up may have a harmful effect; I've got to find a balance between a good setup for sexrepro and the least harmful mutations that disable genes.
And as far as I can see, a large part of the network will always be the part transforming input and output values, so those would often get mutated... but then again, I think it would correspond to just another broken or merged dendrite... I'm mostly concerned that on average, for one mutation of a weight, 1 or 2 dendrites would get broken, so a bot needs to be rather lucky to achieve some weight tweaking without also having lost some dendrites. I'm thinking of maybe using the epigenetic locations to save some weights that have a more dramatic effect, to cause some more frequent tweaking of those values...


def h1 52
def h2 53

cond
*.robage 0 =
start
0 .shoot store
stop

'Inputs
'h1
start
*.eye5
stop

start
40 mult
stop

start
.h1 store
stop

cond
start
100
stop

cond
*.refshoot *.myshoot =
start
0 mult
stop

start
40 mult
stop

start
*.h1 add
stop

start
.h1 store
stop

'h2
start
*.eye5
stop

start
-10 mult
stop

start
.h2 store
stop

cond
start
100
stop

cond
*.refshoot *.myshoot =
start
0 mult
stop

start
100 mult
stop

start
*.h2 add
stop

start
.h2 store
stop

'------------------------ Outputs
'--- o1
start
*.h1 100 mult
stop

start
*.h2 0 mult
stop

start
add
stop

start
100 div
stop

start
100 div
stop

start
sgn -
stop

start
.shoot store
stop

'--- o2
start
*.h1 1 mult
stop

start
*.h2 1 mult
stop

start
add
stop

start
100 div
stop

start
*.maxvel mult
stop

start
100 div
stop

start
.up store
stop

end


Also, about the thesis: it was initially supposed to be an experiment in using neural networks in a game AI, but I realized I didn't have the inputs I needed, so now it's a system for mining and baking data for the AI.  I'll have to work on the AI afterwards.
(Also, I wasn't sure if it was really worth putting that much effort into it; if you spend that much time on a feature, you want to be sure it's something people will notice and appreciate... Damn customers... the data mining feature can also be used for a lot of other stuff.)

And while I'm typing anyway, I think I hit a strange bug with this thing. I was running veggies that gained like 40 nrg per kilo, so they got really big and gained a lot of energy all the time. It then seemed like, once killed, they didn't always disappear (like they never got removed from the bucket), and since the bot only moves forward, it pretty much got stuck every time it was looking at a non-existent veggy. I'll make a proper bug report later today if I have the time.

Offline Moonfisher

  • Bot Overlord
  • ****
  • Posts: 592
    • View Profile
Challenge #3: Neural Network
« Reply #25 on: April 11, 2008, 04:16:41 PM »
OK... this is another hand-made neural network... it can aim, shoot, move forward and reproduce...
Tried to adjust the values manually, so it's pretty stiff for a neural network...
And it's still too small... and I really don't think it'll evolve well either...
Added a few normal genes at the end to get sexrepro in there, and some body regulation to help survival; it can manage without it, but it helps a little...

I'm really hoping to be able to push more or less raw values all the way through the network, like using the raw angle value to set the aim... but it might be better to keep everything binary, so either something happens or it doesn't... but that would basically mean a trained bot would still have all its original actions, just with its conditions trained into the network... this would really limit the possibilities.
I was hoping a larger network might evolve some adjustment to aim using the refvelup and refveldx inputs... stuff like that...
I guess I'd have to set it up so the transformation of inputs and outputs can be customized anyway... not sure what the best way to handle this is.
Also, I figured I could have used mod to store decimal values, but it was a lot easier just to scale everything...

Not sure why I split up some of it and not the rest... I know it looks weird... I got confused...

'Another neural network test
'Just trying to figure out whats possible

'i1 : *.eye5
'i2 : *.refshoot *.myshoot !=
'i3 : *.refxpos *.refypos angle
'i4 : *.body
'-------
'o1 : .shoot
'o2 : .up
'o3 : .setaim
'o4 : .repro

def maxangle 1364
def maxeye 100
def maxbody 2000
def bias 1

def wi1h1 10
def wi2h1 200
def wi3h1 0
def wi4h1 0

def wi1h2 -10
def wi2h2 400
def wi3h2 0
def wi4h2 0

def wi1h3 0
def wi2h3 150
def wi3h3 0
def wi4h3 0

def wi1h4 0
def wi2h4 0
def wi3h4 500
def wi4h4 0

def wi1h5 0
def wi2h5 0
def wi3h5 0
def wi4h5 500

def wh1o1 500
def wh1o2 -200
def wh1o3 0
def wh1o4 0

def wh2o1 0
def wh2o2 50
def wh2o3 0
def wh2o4 0

def wh3o1 0
def wh3o2 300
def wh3o3 -100
def wh3o4 0

def wh4o1 0
def wh4o2 0
def wh4o3 500
def wh4o4 0

def wh5o1 0
def wh5o2 0
def wh5o3 0
def wh5o4 500


def h1 51
def h2 52
def h3 53
def h4 54
def h5 55



cond
*.robage 0 =
start
0 .shoot store
stop

start
.bias .h1 store
.bias .h2 store
.bias .h3 store
.bias .h4 store
.bias .h5 store
stop



'********** Inputs
'======= h1
'--- i1
start
 *.eye5
stop

start
 200 mult .maxeye div 100 sub
stop

start
 .wi1h1 mult
stop

start
 100 div
stop

start
 *.h1 add .h1 store
stop


'--- i2
start
 *.refshoot *.myshoot sub abs sgn 200 mult 100 sub
stop

start
 .wi2h1 mult
stop

start
 100 div
stop

start
 *.h1 add .h1 store
stop


'--- i3
start
 *.refxpos *.refypos angle .maxangle mod
stop

start
 200 mult .maxangle div 100 sub
stop

start
 .wi3h1 mult
stop

start
 100 div
stop

start
 *.h1 add .h1 store
stop


'--- i4
start
 *.body
stop

start
 200 mult .maxbody div 100 sub
stop

start
 .wi4h1 mult
stop

start
 100 div
stop

start
 *.h1 add .h1 store
stop

start
 *.h1 5 div .h1 store
stop



'======= h2
'--- i1
start
 *.eye5

 200 mult .maxeye div 100 sub

 .wi1h2 mult

 100 div

 *.h2 add .h2 store
stop


'--- i2
start
 *.refshoot *.myshoot sub abs sgn 200 mult 100 sub

 .wi2h2 mult

 100 div

 *.h2 add .h2 store
stop
 

'--- i3
start
 *.refxpos *.refypos angle .maxangle mod

 200 mult .maxangle div 100 sub

 .wi3h2 mult

 100 div

 *.h2 add .h2 store
stop


'--- i4
start
 *.body

 200 mult .maxbody div 100 sub

 .wi4h2 mult

 100 div

 *.h2 add .h2 store
stop

start
 *.h2 5 div .h2 store
stop



'======= h3
'--- i1
start
 *.eye5

 200 mult .maxeye div 100 sub

 .wi1h3 mult

 100 div

 *.h3 add .h3 store
stop


'--- i2
start
 *.refshoot *.myshoot sub abs sgn 200 mult 100 sub

 .wi2h3 mult

 100 div

 *.h3 add .h3 store
stop


'--- i3
start
 *.refxpos *.refypos angle .maxangle mod

 200 mult .maxangle div 100 sub

 .wi3h3 mult

 100 div

 *.h3 add .h3 store
stop


'--- i4
start
 *.body

 200 mult .maxbody div 100 sub

 .wi4h3 mult

 100 div

 *.h3 add .h3 store
stop

start
 *.h3 5 div .h3 store
stop


'======= h4
'--- i1
start
 *.eye5

 200 mult .maxeye div 100 sub

 .wi1h4 mult

 100 div

 *.h4 add .h4 store
stop


'--- i2
start
 *.refshoot *.myshoot sub abs sgn 200 mult 100 sub

 .wi2h4 mult

 100 div

 *.h4 add .h4 store
stop


'--- i3
start
 *.refxpos *.refypos angle .maxangle mod

 200 mult .maxangle div 100 sub

 .wi3h4 mult

 100 div

 *.h4 add .h4 store
stop


'--- i4
start
 *.body

 200 mult .maxbody div 100 sub

 .wi4h4 mult

 100 div

 *.h4 add .h4 store
stop

start
 *.h4 5 div .h4 store
stop


'======= h5
'--- i1
start
 *.eye5

 200 mult .maxeye div 100 sub

 .wi1h5 mult

 100 div

 *.h5 add .h5 store
stop


'--- i2
start
 *.refshoot *.myshoot sub abs sgn 200 mult 100 sub

 .wi2h5 mult

 100 div

 *.h5 add .h5 store
stop


'--- i3
start
 *.refxpos *.refypos angle .maxangle mod

 200 mult .maxangle div 100 sub

 .wi3h5 mult

 100 div

 *.h5 add .h5 store
stop


'--- i4
start
 *.body

 200 mult .maxbody div 100 sub

 .wi4h5 mult

 100 div

 *.h5 add .h5 store
stop

start
 *.h5 5 div .h5 store
stop


'in
'-----------------------------------------------------------------
'out


'********** Outputs
'======= o1
start
 *.h1

 .wh1o1 mult

 *.h2

 .wh2o1 mult

 add

 *.h3

 .wh3o1 mult

 add

 *.h4

 .wh4o1 mult

 add

 *.h5

 .wh5o1 mult

 add

 100 div

 5 div

 sgn 1 mult - 0 ceil

.shoot store
stop


'======= o2
start
 *.h1
stop

start
 .wh1o2 mult
stop

start
 *.h2
stop

start
 .wh2o2 mult
stop

start
 add
stop

start
 *.h3
stop

start
 .wh3o2 mult
stop

start
 add
stop

start
 *.h4
stop

start
 .wh4o2 mult
stop

start
 add
stop

start
 *.h5
stop

start
 .wh5o2 mult
stop

start
 add
stop

start
 100 div
stop

start
 5 div
stop

start
 *.maxvel mult 100 div
stop

start
 .up store
stop



'======= o3
start
 *.h1
stop

start
 .wh1o3 mult
stop

start
 *.h2
stop

start
 .wh2o3 mult
stop

start
 add
stop

start
 *.h3
stop

start
 .wh3o3 mult
stop

start
 add
stop

start
 *.h4
stop

start
 .wh4o3 mult
stop

start
 add
stop

start
 *.h5
stop

start
 .wh5o3 mult
stop

start
 add
stop

start
 100 div
stop

start
 5 div
stop

start
 100 add 2 div .maxangle mult 100 div
stop

start
 .setaim store
stop




'======= o4
start
 *.h1
stop

start
 .wh1o4 mult
stop

start
 *.h2
stop

start
 .wh2o4 mult
stop

start
 add
stop

start
 *.h3
stop

start
 .wh3o4 mult
stop

start
 add
stop

start
 *.h4
stop

start
 .wh4o4 mult
stop

start
 add
stop

start
 *.h5
stop

start
 .wh5o4 mult
stop

start
 add
stop

start
 100 div
stop

start
 5 div
stop

start
 sgn 50 mult 0 floor
stop

start
 .repro store
stop


'----- some regular genes, it does fine without them, but they help a litle. (Not going to increase this network...)

cond
*.nrg *.body 3 mult >
start
100 .strbody store
stop

cond
*.fertilized 5 >
*.nrg 1000 >
*.body 200 >
*.robage 50 >
start
40 .sexrepro store
stop

cond
*.robage 300 >
*.nrg 1000 >
*.body 500 >
*.kills 5 >
*.eye5 0 >
*.refshoot *.myshoot =
*.robage 350 mod 300 >
start
-8 .shoot store
*.refxpos *.refypos angle .setaim store
*.refvelup 5 add .up store
stop

end
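The input transformation repeated throughout the network above, `200 mult .maxX div 100 sub` (with maxeye, maxbody, or maxangle as the max), just maps a raw value in [0, max] onto the network's working range of roughly [-100, 100]. In Python terms (a sketch):

```python
def normalize(x, max_value):
    """Map x in [0, max_value] onto [-100, 100]
    (mirrors the DNA sequence: 200 mult .max div 100 sub)."""
    return (x * 200) // max_value - 100

assert normalize(0, 2000) == -100     # empty input maps to the bottom
assert normalize(1000, 2000) == 0     # midpoint maps to zero
assert normalize(2000, 2000) == 100   # full input maps to the top
```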


Offline Moonfisher

  • Bot Overlord
  • ****
  • Posts: 592
    • View Profile
Challenge #3: Neural Network
« Reply #26 on: April 12, 2008, 07:44:31 AM »
The NN posted above isn't that good; I did some tweaking and stuff to try and devolve it in a good way...
But with the transformation of inputs and outputs, the whole structure gets more complex...

I've been trying to figure out how to keep everything in the stack all the time, and the only ways I can think of involve calculating the hidden nodes several times (all nodes for each output), and the point here is to get faster evo sims; the network is already slowing things down by being big...
So I definitely need to use vars, but I'm thinking of skipping the first hidden layer, saving the value for each output into a var, and then transferring it into the output at the very end.
This way I would only need one var for every output, and if I cut out transformation and just use raw values, I could actually limit the whole thing to follow this method (i1 and i2 would just be raw inputs):
*.i1 .wi1h1 mult *.i2 .wi2h1 mult add *.o1 add .o1 store
and so forth, pushing the bias directly into the output vars.
This way it's more likely that a mutation would change something relevant...
It would still need to be scaled up and down, but I figure this could be done by having really large weights and then scaling everything down before it's saved in the output var. (I think I remember seeing a ^ operator, so it should be possible to scale everything down without adding too much.)
My concern is that raw values multiplied by a weight and added together are likely to exceed the cap of 32000 even when scaled down... so I'm not sure what kind of effect that would have. I was thinking of maybe capping the values, so exceeding a cap would keep the max value instead of messing up the input...
But then I'd be adding more stuff again... and who knows... maybe a network could actually exploit it in a good way...
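The capping idea above, sketched in Python (conceptual only; 32000 stands in for the cap discussed earlier, and clamping after every add is just one possible placement):

```python
CAP = 32000  # stands in for the memory-location cap discussed above

def clamp(v, lo=-CAP, hi=CAP):
    """Saturate instead of overflowing: out-of-range values keep the extreme."""
    return max(lo, min(hi, v))

def weighted_sum(inputs, weights):
    """Accumulate input*weight terms, clamping after each add so a single
    large product can't wrap around and corrupt the total."""
    total = 0
    for x, w in zip(inputs, weights):
        total = clamp(total + x * w)
    return total

assert weighted_sum([1000, 1000], [100, 100]) == CAP  # saturates at the cap
assert weighted_sum([10, -5], [100, 100]) == 500      # small sums unaffected
```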

My biggest problem so far with raw values is that -1 in shoot is an attack, positive values are information... -6 is also an attack, -3 is poison, also an attack... but -2... shoots energy... which is fine with the normal setup, but with weights regulating the outputs... I'm afraid bots wouldn't be able to combine several shot types this way without constantly firing energy shots... so I may have to convert certain outputs to be binary: we either shoot -1 or -6 or -3 or -4... not as dynamic... but probably a lot more stable.

I'm still not happy, though. If I put a big network together it may have better odds of achieving something useful with fewer mutations, and possibly be able to evolve from a better base with more stable evolution... but even if this is true, the size of the code would reduce its speed, so even if it really does work better than random mutations, it's not even certain it'll be any faster... the advantage is the way nothing is set in stone in the network; every action is triggered by a mix of inputs, so theoretically it should be a better evo base... the downside is that the number of weights grows rapidly with every input, output, and hidden node you add, and to have a proper NN evo base you need a large network...
Basically, I'm saying I'm falling out of love with this idea lately... unless I can see some progress from the test network at some point, I may drop this idea, at least for a while...

Offline Moonfisher

  • Bot Overlord
  • ****
  • Posts: 592
    • View Profile
Challenge #3: Neural Network
« Reply #27 on: April 15, 2008, 04:14:52 PM »
I tried running a sim with the last network I posted (NNOne.txt).
Only with point mutations, but a higher probability...
I was running size 2 with thin liquids and normal F1 settings (F1 costs, toroidal borders, no bodies, and so forth), except the veggy cap was 500 and it repopped at 60 by 70 veggies...
After about 150 mutations it started to form "fans" and sweep up the veggies (probably due to the abundance of food).
It seems like it's only aiming in one direction, forming fans, and shooting...
It's not really doing much with the network, though.
I guess the hand-made weights are also making it too stiff to really evolve properly.
Although, before "devolving", it maintained an average of 50-150 bots; in the end it was averaging above 1000, so it's not like it didn't get better, it just didn't put the network to much use.

I'm still considering making it binary, using mostly raw inputs and custom output actions, then training all the conditions of a hand-made bot into the weights.
This way mutations could slowly break down and tweak conditions without doing too much damage.
But it still limits the possibilities, so I'd really like to have more raw outputs...

Since the network didn't entirely flop, and actually managed to get better in a shorter amount of time, I made a new network with no transformation, using only raw inputs...
Made some hand-made values again to get it started, so it's still stiff, but I only set weights for the first 4 inputs and outputs, then expanded it with a lot of empty network... it also has a different bias for every neuron, since evolution seems to have that effect anyway.
I'm hoping to see some kind of proper use of the network... any kind at all... if I can get it with a network with hand-made weights, a properly trained network should have even better chances.
I ran the new network for a short while, and it seemed to trigger shell rather fast, but I'm pretty sure it just got an inc somewhere or something like that...

I added some bots and a sim; the sim is the last save from the last bot (NNOne.txt).
There's also the base bot used and 2 of the last evolved bots.
And also the new networks: the first one using only 4 inputs and outputs, and the second one identical but with a larger empty network.

Offline d-EVO

  • Bot Destroyer
  • ***
  • Posts: 125
    • View Profile
Challenge #3: Neural Network
« Reply #28 on: November 21, 2008, 06:32:03 PM »
I'm gonna give this a shot.

Just grappling with this neural network concept, so I made this very simple bot, just to start.

Can it be classified as a neural network?

And if eye5 and refeye are the only inputs you can use, why are people entering bots that use other eyes?

1:      2 is true
2:      1 is false

Offline Moonfisher

  • Bot Overlord
  • ****
  • Posts: 592
    • View Profile
Challenge #3: Neural Network
« Reply #29 on: November 22, 2008, 06:47:14 AM »
Well, I guess you could call it a hand-authored binary neural network, in the sense that it stores values in hidden neurons.
And I would say using condition logic as input is OK, just as using your output in a condition doesn't violate any "rule"...
But I think using condition logic in the middle of the network kind of makes the network redundant... then you could just have put the input straight into the final conditions without storing anything anywhere.

Usually any input would be sent to all the hidden neurons in the first layer (this is not a rule, however; it just leaves more options for the network to adjust when using backpropagation or random mutations).
You would also normally multiply the input by a value (a weight) which is specific to each connection. So every time a value travels through a dendrite, it gets modified according to the "thickness" of the dendrite...
Then you add up all the weighted inputs for a neuron and move on to the next one.
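In conventional terms, the description above is just a weighted sum per neuron. A minimal Python sketch (names illustrative):

```python
def neuron(inputs, weights, bias=0):
    """One neuron: each input travels down its own 'dendrite' and is scaled
    by that dendrite's weight; the neuron sums the results plus a bias."""
    return bias + sum(x * w for x, w in zip(inputs, weights))

# Two inputs feeding one neuron; the negative weight suppresses firing.
assert neuron([1, 1], [50, -80], bias=10) == -20
assert neuron([1, 0], [50, -80], bias=10) == 60
```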

So basically, what I'm saying is that you could consider the structure of your bot a neural network, but you're not really getting any of the advantages of a neural network from this.
Normally you would use the weights to turn the input negative for some neurons, in order to make sure the right ones fire at the right time.
This is pretty hard to adjust manually, however... I was toying with the idea of making a small script that would generate most possible inputs and outputs from a regular bot, and then use those values to train a network using BP... but at the moment my spare time is going to the DB mod for forming network structures instead of random mutations... I think that project is more interesting in the long run.

And you can use any eye you want, but all the refvars (like refeye) will be coming from your focus eye. By default your focus eye is eye number 5, but you can change this.
You can also just use eyef, which will always be the eye you're using to focus with. The other eyes will still show if they see something, but you need to change focuseye, or turn towards them, to know if it's a friend or foe... and you can't change focuseye mid-cycle; if you switch focuseye, it will take effect next cycle. This is also why some fancy eye systems end up letting you turn back and forth between 2 friends... and that's also the reason why the eyes in Spinner offset all the eyes to the left and use eye9 as the focuseye, so it's always turning in the same direction and doesn't get stuck turning back and forth. Turning is free in F1, so spinning around yourself has no cost, and it's an effective way to locate incoming enemies fast.

The sysvars eye1-eye9 will just show if they see anything and how far away it is, while eyef shows the same thing, but instead of being a specific eye it's the value from the eye you've selected as your focus eye. So if you didn't change your focus eye, then eye5 and eyef will be the same; but if you change your focus eye a lot, then eyef can make it easier to write the code you need.
If you see something with your focus eye, then all the refvars will be filled out with useful information, like refeye and refshoot, which tell you how many eye and shoot commands the target has in its DNA, so you can compare with your own DNA and see if it's one of your own kind. You can also see which way the target is moving, and how fast, and use those values to match the opponent's movements. You also get the opponent's coordinates, making it easy to find the angle and set your aim to point exactly at the opponent. (You will get the coordinates for the next cycle, which means you don't need to worry about where the guy is moving to.) It's very easy to track an opponent:
*.refvelup .up store 'Match the opponent's forward movement
*.refvelsx .sx store 'Match the opponent's sideways movement
*.refxpos *.refypos angle .setaim store 'Aim at the next location where your opponent will be

With this code, the only way for the target to get away is by crossing a toroidal border, or if something blocks your vision, or maybe if something rams into you and pushes you away.
You may also want to have a higher .up value than your opponent, or you'll never get any closer.
So maybe something like:
*.refvelup .up store
*.refvelsx .sx store
*.refxpos *.refypos angle .setaim store
*.eyef 30 >
*.maxvel .up store

This is all you really need to track an opponent. It's my impression that it wasn't always this easy, but now it is, so there's no reason to make it complicated.

Also, if you don't want to get sidetracked with strange eye systems, just use a wider eye5. Something like 300 .eye5width store will make your eye5 about as wide as all your regular eyes together (but the range will also be shorter).
« Last Edit: November 22, 2008, 07:11:57 AM by Moonfisher »