Author Topic: Neural nets and mutations

Offline Moonfisher

Neural nets and mutations
« on: April 15, 2008, 05:09:17 PM »
Is there a way to set point mutations to affect values far more often than operators?
I'm trying to evolve a neural network with point mutations, but they keep breaking down the structure.
For instance, if the bot produces no shell, the output gene for shell could mutate its store into an inc, and that would have a more noticeable effect.
This would help the new bot survive, but it would also destroy the network leading to that output and kill the chances of a more balanced shell management evolving through the network.
Basically I'd like the point mutations to affect only values and not operators, if possible, or at least to balance it to something like 1 operator change for every 10000 value mutations.
It would also be useful for evolving league bots to tweak their constants.
So if you're going to be messing with mutations anyway...

Offline Testlund

Neural nets and mutations
« Reply #1 on: April 15, 2008, 05:36:08 PM »
There's a slider in the mutations dialog where you can set the percentages for type and value, for both Point Mutations and Copy Error, if that's what you mean.
The internet is corrupt and controlled by criminally minded people.

Offline Numsgil

  • Administrator
  • Bot God
  • *****
  • Posts: 7742
    • View Profile
Neural nets and mutations
« Reply #2 on: April 15, 2008, 06:00:37 PM »
That wouldn't help prevent a store turning into an inc, though.

Offline Moonfisher

Neural nets and mutations
« Reply #3 on: April 16, 2008, 03:13:59 AM »
Yeah, I tried fiddling with that slider, but it doesn't seem to make any difference.
The problem is that values and operators seem to be treated as the same thing. In a neural network the values are the thickness of a connection, but a changed operator will most often correspond to a broken connection...
So when evolving a neural network you break pieces of the network about as fast as the weights are mutated, and there's little to no chance for proper values to evolve before the network is completely destroyed by the mutations.

Offline Numsgil

    • View Profile
Neural nets and mutations
« Reply #4 on: April 16, 2008, 04:03:45 AM »
As far as the code is concerned, a store is of type StoreCommand with value 1, whereas a number is of type Number with value 56 (or whatever). So when you tell the program to change only values, that just means it won't change a 56 into a store.
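To make that concrete, here is a tiny gene with the internal representation written out in comments. This is a hedged sketch based on the description above; the gene itself is made up:

  ' every token below is stored internally as a (type, value) pair:
  ' "50" is (Number, 50) and "store" is (StoreCommand, 1)
  cond
  *.eye5 30 >
  start
  50 .up store
  stop
  end
  ' a value-only mutation keeps each token's type: the 50 can drift to 49
  ' but can never become a store; a store, though, can presumably still
  ' value-mutate into a sibling command like inc, which is why the slider
  ' doesn't solve the problem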

Offline Moonfisher

Neural nets and mutations
« Reply #5 on: April 16, 2008, 07:58:18 AM »
Yeah, I kind of figured it was something like that...
I could use more control, though, so I could completely lock mutations of store commands, or at least give them far lower odds than mutations of values...
The two kinds of mutation don't really seem that related to me; they have very different impacts on the bot they act on. Changing a value should usually just change the range of some condition, or the strength of shots, or repro size and things like that, whereas changing a store command will usually have a drastic and often very negative effect, like suddenly reproducing with 1% of your nrg every cycle, or eliminating a condition, or breaking all the conditions in a gene, or disabling shot boosts...
It just doesn't seem well balanced; it feels like mutating a store command should be something that takes a lot of mutations, as in nature. I don't know exactly how this works in nature, but nature isn't code, so I don't think it breaks down so easily, and I don't think single point mutations have such destructive effects...
The way I imagine it, a mutation of an operator would only move it slightly in one direction, getting closer to another operator but still remaining the same...
So an add would have to mutate maybe 20 times in the same direction to become a sub, and 200 times in a certain direction to become a mult...
Basically operators would have a whole range of possible values, with related operators placed near each other and wider gaps around fragile operators like store, stop, start and cond...
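To put the asymmetry in DNA terms, here is a made-up gene before and after each kind of single point mutation (a sketch, not output from the actual mutation code):

  ' original gene: fire a feeding shot when something is close in front
  cond
  *.eye5 30 >
  start
  -1 .shoot store
  stop
  end
  ' value mutation (30 -> 28): the bot fires at a slightly shorter range
  ' operator mutation (store -> inc): .shoot is incremented every cycle
  ' instead of being set to -1, and the gene stops feeding entirely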

That, or just the ability to turn off operator mutations, would be very useful for my purposes. So if EricL is messing with mutations anyway, a checkbox to disable mutations of operators would be a sweet addition.

Offline EricL

Neural nets and mutations
« Reply #6 on: April 16, 2008, 11:51:10 AM »
I believe it does indeed work this way in nature at one level, i.e. small changes in gene coding can have large impacts on functionality and expression. A single base pair change can create a stop codon in the middle of a gene, for example, which is a fine way to disable the gene. I'm no geneticist, but I do not think the adjacency space for coding DNA maps smoothly to the functional protein space. Said another way, I do not think one can make strong predictions along the lines that small changes in coding DNA result in small changes in protein structure and that large changes result in large changes. Sometimes this is the case, I imagine, but I think mostly only in cases where there is correction logic at the transcription level that ignores long runs of repeats and such. If a mutation really changes a codon so that it codes differently, I think the resulting change in the protein can be large or small.

Biological organisms appear more resilient to single nucleotide polymorphisms because they have evolved higher level redundancy above the coding level. Often multiple genes or even whole gene families work together and overlap in what they do. The loss of one gene may not be serious. Additionally, coding DNA requires context. A gene that suddenly codes for a new protein may not work well with the promoters and other triggers for its production, and thus its expression may be naturally suppressed.

One can argue, I suppose, that the DB adjacency space is smooth relative to the resulting functionality in cases where the base pair in question represents the value being stored to a memory location, i.e. that a small change in the value of a sysvar results in a small change in functionality for most memory locations. This is not the case, however, for many other base pairs. An obvious example is a change to the location to which the store is being performed. So simply restricting or greatly reducing the probability of mutating base pair types may not achieve what you are after. Small changes in base pair value can still result in huge changes in the resulting functionality.
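The same point in DNA terms, as bare instruction fragments (a hedged sketch; the raw location numbers are arbitrary):

  ' a one-unit change to a base pair that is a stored *value*:
  50 .up store     ' thrust forward with strength 50
  51 .up store     ' nearly identical behaviour
  ' the same one-unit change to a base pair that is the store *address*:
  50 300 store     ' writes 50 into memory location 300
  50 301 store     ' writes 50 into a neighbouring but unrelated sysvar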

I understand what you are trying to do with the neural net, but I'm not really inclined to add a lot of special case functionality to the simulator for things that are somewhat off the beaten track with respect to the mainline focus of DB, particularly when I think they should be done differently. If you're trying to build plasticity into a single cell organism, then I don't think you need a net for that. I think the right way to do it is with inherited values that determine high level functionality in a non-mutating environment. LionFish, for example, has something like 20 different private values which govern its high level behaviour. It mutates them itself from generation to generation and lets selection work at that level. It evolves and adapts to changing environments and competitors within its design limits by changing these values generation to generation without mutations being enabled. Its behaviour is plastic.
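A minimal sketch of that inherited-value pattern, assuming locations 971-990 are the inherited epigenetic block discussed later in this thread; the name, the location and the jitter source are all made up:

  def shotpower 971

  cond
  *.robage 20 =
  start
  ' once the epigenetic transfer has finished, nudge the inherited value
  ' by roughly -2..+2 using current energy as a crude pseudo-random source;
  ' selection then acts on the value with simulator mutations disabled
  *.shotpower *.nrg 5 mod 2 sub add .shotpower store
  stop
  end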

If you really want to build a net, then I think the nodes in that net should be whole bots. I'd be very interested in adding capabilities to improve bot-bot communication...

Bottom line, I don't think a system that uses the simulator mutation code to change edge weights will work even if we made the changes and restricted the mutation space as you suggest. I think you should manage your own network weighting in your DNA with mutations turned off completely, either within a single bot a la LionFish or with multiple cooperating bots acting as nodes in a much larger net.

That said, if you still really want the mutation space restriction options, I will put them on the list, if for no other reason than that it is a requested feature that would be simple to implement.
« Last Edit: April 16, 2008, 11:51:50 AM by EricL »
Many beers....

Offline Moonfisher

Neural nets and mutations
« Reply #7 on: April 16, 2008, 01:46:56 PM »
You're probably right: a neural net in nature consists of many cells working together, and it's not how single cells work.
But a multicellular network wouldn't work in DB either, since it's probably something that evolved from the sensory networks of larger organisms: from simple reactions to pain receptors, to more complex reactions combining more inputs, and finally to actual thoughts. (I know a brain is more than just a neural network, but octopuses keep most of their neurons outside the central brain and still seem capable of quite "complex" thoughts.)
I can also understand that a high amount of redundant code and the cooperation of several genes will limit the damage caused by mutations.
I did try to split the network up into a lot of genes, but that just broke the network more often.
The problem is that for every weight I have an input, an add, a mult and an output... and only value mutations have a slight chance of doing something useful with the network, whereas the operator mutations have a high chance of breaking the structure.
So it only has a 1 in 5 chance of having a very small chance of getting a useful mutation, and it would take several useful mutations to actually get anywhere.
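Written out as a gene, one connection of such a network shows that ratio (a sketch; the names and locations are made up):

  def w1 971
  def node1 981

  cond
  start
  ' node1 += eye5 * w1: an input, a mult, an add and a store per weight,
  ' and only the weight token itself mutates usefully
  *.eye5 *.w1 mult *.node1 add .node1 store
  stop
  end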

Locking mutations would limit the network to just 20 weights... This means I would face the choice of severely limiting the size of the network, eliminating connections that seem to have no effect, or taking a trained network and changing only the key weights with the largest impact.
I could also set a lower cap on the weights, allowing me to use the binary operators to store 2 different weights in one location (but with a fairly low cap on the weights).
Or I could keep a small offset along with a weight-modifying value in one location, making it possible to change any weight in the network, but still limited to changing only 20 different weights (the only advantage being that the network could prioritize and sacrifice certain weight changes for others).
The last idea is probably the one I like best, but still not really as flexible as I had hoped.
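The packing idea might look roughly like this, using div/mod arithmetic in place of true bitwise operators (a sketch; the locations and the 0..99 cap are made up):

  def pack1 971
  def wa 981
  def wb 982

  cond
  start
  ' unpack two weights, each capped to 0..99, from one inherited location:
  ' weight A is the hundreds part, weight B the remainder
  *.pack1 100 div .wa store
  *.pack1 100 mod .wb store
  stop
  end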

The best alternative I can think of is building each weight from many genes, for instance 20 genes adding 5 and 20 genes subtracting 5.
This way the odds of the weight changing would be a lot better.
The downside is that I have a LOT of weights, so multiplying that number by 40 would make the bot very long... very, very long...
And the point was to get faster evo results, so I'd rather not slow the sim down too much...
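Two of those 40 redundancy genes might look like this (a sketch with a made-up location; breaking any one gene shifts the weight by only 5 instead of severing the connection):

  def w1 981

  ' one of the 20 "+5" genes, run once at birth
  cond
  *.robage 0 =
  start
  *.w1 5 add .w1 store
  stop

  ' one of the 20 "-5" genes; point mutations act on the individual constants
  cond
  *.robage 0 =
  start
  *.w1 5 sub .w1 store
  stop
  end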

And you're also right that locking operator mutations wouldn't prevent the network from breaking down; however, the mutations to the structure, even when harmful, would still have a small chance of doing something useful rather than completely destroying it.
Also, the absence of inc and dec would reduce the odds of shell starting to form from a mutated store rather than from something in the network, creating the possibility for the shell amount to be regulated later on.

But breaking weights down into several genes would probably be more efficient than locking operator mutations, although in the long run it might get harder for the weights to adjust as the genes get broken down...

I think the best setup for my purpose (and I realize it doesn't quite fit single cell evolution) would be to do both: break each weight down into 4 or more values to increase the odds of the weights changing versus the odds of the mem locations changing, and... lock mutations of operators.
Even broken nodes can still be useful, since they are still modified by some weights before reaching an output.
I know it's not the right way to do it; it's kind of cheating at evolution... but then Darwin will just have to sue me, I want the feature!
(If it's not too much trouble.)

I'm also going to try to tweak league bots with it, but it's hard to improve league bots this way, since you need a steady flow of enemies joining the sim.
The best way I've found so far is adding another league bot as an alga, with a very high veggie cap and no energy gain, and giving Alga Minimalis some more starting energy. The problem is that this only lets the bot evolve against weaker bots... but it might still be enough to tweak certain values.

Also, the whole redundant code thing you described isn't far from what a NN is doing: random values and inputs getting scrambled together to create some sort of output... I'm just trying to nurture the structure.
Maybe I'm overprotective, but I'm very impatient and I want good evo results NOW!

Offline Numsgil

Neural nets and mutations
« Reply #8 on: April 17, 2008, 12:47:00 AM »
Set up a birthing process where the mother bot "teaches" the baby bot using the in/out channels over several cycles. It wouldn't be foolproof or automatic, but it would let you use more weights.
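A minimal sketch of that handshake, assuming values stored to .tout1-.tout10 appear in a tie partner's .tin1-.tin10 while the birth tie holds (the names and locations are made up):

  def w1 981
  def w2 982

  ' mother: keep publishing two weights on the tie out-channels
  cond
  *.numties 0 >
  start
  *.w1 .tout1 store
  *.w2 .tout2 store
  stop

  ' baby: while the birth tie still exists (the first 15 cycles), copy
  ' the incoming values into its own weight locations
  cond
  *.robage 15 <
  start
  *.tin1 .w1 store
  *.tin2 .w2 store
  stop
  end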

Offline Moonfisher

Neural nets and mutations
« Reply #9 on: April 17, 2008, 03:21:15 AM »
Hmm, I think that would be enough. It takes 15 cycles to transfer epigenetic memory anyway, so you could transfer at least 150 extra weights.
You wouldn't need an offset and cap for the weights; you could just send them in some fixed order and have an end value to indicate when you're done, or just break the tie...
I think birth ties break automatically after 15 cycles, even if you try to stiffen them. But 170 weights is a lot more to play around with... I'm going to have to figure out how many inputs, outputs and neurons that would allow, once I'm fully awake...
It would definitely be a good tool for a plastic network, one using fewer raw inputs and outputs and transforming everything into a certain range of values... so the structure could be more complex.
The flexibility and possibilities would still be predictable and fairly limited, though... since the inputs and outputs can't change...
But it's definitely a good point; it didn't even occur to me to use tout1-10 during birth. The worst that can happen is that the tie breaks and a baby is let go prematurely...
What bugs me, though, is that cannibal bots often eat their young, and the young always spawn at the front, like a snack table... so newborns need to get clear of their parents fast, but the birth tie would make that hard... I'm hoping it would still be possible, but even with bots that break birth ties right away you'll often see them eat 2-3 of their own young before successfully reproducing.
I guess the base network will need a conspec check, but if the network is built this way I should probably transform the inputs and outputs and have a lot more binary outputs, so the whole thing would be less "natural" anyway...

Still not a huge number of weights, though... it wouldn't be enough for 10 inputs, 10 hidden neurons and 10 outputs... and the networks I'm fiddling with now are already bigger (and not big enough).
But it's definitely worth thinking about some more. If the network transforms values it can have a lower weight range and allow 2 weights per mem location, bringing it up to 340 weights.
Although capping weights would also create more limitations, if it's even possible; I've seen weights go way off the scale in a network...
I'll think about this when I have some time... work calls.

Offline Numsgil

Neural nets and mutations
« Reply #10 on: April 17, 2008, 03:41:58 AM »
You could have the young shoot an info shot that makes the parent turn 180 degrees around. Then the parent wouldn't eat the young, but it could still use the outs to communicate.
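In DNA that could look like the gene below. It's a sketch that assumes storing a positive memory location into .shoot fires an info shot writing .shootval into that location of whatever it hits, and that 628 is half of the 1256-unit circle, i.e. 180 degrees:

  ' newborn: for the first few cycles, shoot an info shot at whatever is
  ' in front; if that is the mother, 628 lands in her .aimsx and she spins
  ' around instead of eating the baby
  cond
  *.robage 5 <
  *.eye5 0 >
  start
  .aimsx .shoot store
  628 .shootval store
  stop
  end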

Offline Moonfisher

Neural nets and mutations
« Reply #11 on: April 17, 2008, 07:57:36 AM »
Heh, that's actually what I'm doing in the latest network I made... it doesn't always work, but it helps.
I've also been toying with the idea of using tout, and I'm thinking maybe the bot could have 170 weights it transfers at birth, the most important ones, but still change other weights as well, and only transfer those if the changes are significant and we're doing well... just pick a random bot that doesn't seem to be doing as well as us and give it the extra values... so you could transfer 5 extra weights per cycle, along with an offset for each, without having to cap the weights or anything like that.
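The receiving side of that (offset, value) scheme could stay tiny, since a store address is just a number on the stack. A sketch, assuming .tin1 carries the offset, .tin2 the value, and that a bot can clear its own .tin:

  ' receiver: .tin1 names the target memory location and .tin2 the new
  ' weight; the indirect store can update any weight, with no fixed cap
  cond
  *.tin1 0 >
  start
  *.tin2 *.tin1 store
  0 .tin1 store    ' clear the channel so each pair is applied once
  stop
  end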
I think it could work... but I have no idea what memory locations to use. If I had what I would call a very small network of 10 inputs, 10 hidden neurons and 10 outputs, then I would have 200 weights plus the 10 nodes... so 210 memory locations... for a small network... it doesn't scale very well. 210 is already too much for the range that is guaranteed not to change, and probably too much for me to find a single range able to keep all the weights; it would need to be split up into different areas...
Apart from that, I don't think I can define that many locations (not that I would want to at that scale anyway)... but generally this means I'd have a LOT of memory locations to manage, manually...
It's possible... but not easy, and not foolproof either. It would need a lot of safety in case the birth tie breaks: good default values, the ability to indicate you're an unfinished bot so others can fix you, evaluating whether more weights should be transferred... I think it could work well... and with the whole thing locked from mutations the structure could be more complex, and there could be plenty of helper genes and generally more customized mutations of weights...
But it wouldn't be easy to put together... possibly kind of fragile... and not very flexible... but better control...
Either way, it's too big a project for me to start on at this time...

Offline Moonfisher

  • Bot Overlord
  • ****
  • Posts: 592
    • View Profile
Neural nets and mutations
« Reply #12 on: April 22, 2008, 02:24:27 PM »
I'm actually toying with the idea of making a small mod with my own NN mutation.
The idea would just be to let mutations randomly create nodes and randomly connect inputs to nodes, and nodes to other nodes or outputs...
And of course mutate the weights and occasionally break connections.
It would basically just generate the code for a neural network from scratch, only forming the connections that are needed.
I know this probably has nothing to do with single cell evolution, but I'm a big fan of neural networks, and I think this could be a very interesting way to evolve a bot. It would just be a mod for myself and anyone interested anyway.
And it's not that far from what the current code mutations are doing... maybe kind of far, but not way off; it exists in nature at least.
But I'm still just toying with the idea; it still needs work. I'm thinking the inputs and outputs should be able to evolve past raw values, so hopefully something like *.refxpos *.refypos angle could become an input if they're very lucky... and some outputs may have to be binary, like firing a certain shot type, but I'm still trying to figure out how this could arise via mutations...
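For a sense of what the mod might emit, here is the kind of gene it could generate once it has randomly wired a composite input through a hidden node to an output (a sketch; the names and locations are made up):

  def w1 981
  def w2 982
  def node1 983

  cond
  start
  ' evolved composite input: the angle to the last-seen bot, weighted into
  ' a hidden node, which in turn drives the turning output
  *.refxpos *.refypos angle *.w1 mult .node1 store
  *.node1 *.w2 mult .setaim store
  stop
  end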
« Last Edit: April 22, 2008, 03:09:15 PM by Numsgil »