A simple neural net can be seen as layers of dots (memory locations).
Most often you see 3 layers of just a few dots each.
Draw lines between the dots and call them weights.
So a single dot on the second row takes each dot in the row above, multiplies its input by its weight line, and sums the results.
Depending on the results of a single neuron, the net adjusts the weights (if a connection is always wrong, its weight moves toward zero; if it is consistently the opposite of what is wanted, it becomes negative).
In the beginning the neural net is seeded with random numbers, and sigmoid calculations are done to balance the network.
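To make the idea concrete, here is a minimal sketch of one such "dot" in Python (just for illustration; the function names, the 3-input layout, and the sample inputs are all made up):

```python
import math
import random

def sigmoid(x):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

# random seeding: a hypothetical 3-input neuron starts with
# small random weights, as described above
random.seed(42)
weights = [random.uniform(-1.0, 1.0) for _ in range(3)]

def neuron_output(inputs, weights):
    # each dot multiplies every input from the row above by its
    # weight line, sums the products, then applies the sigmoid
    total = sum(i * w for i, w in zip(inputs, weights))
    return sigmoid(total)

print(neuron_output([1.0, 0.5, -0.2], weights))  # some value in (0, 1)
```

Training would then nudge each entry of `weights` up or down depending on how wrong the output was; that update rule is the hard part discussed further below.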
And then with some hope and some clever thinking we might get it into DB (so far I've seen no example of a bot actually using sigmoid functions; some claim to have, but I doubt them).
Here is a not-too-complex read:
http://arxiv.org/ftp/cs/papers/0308/0308031.pdf There are several ways it can work, and this is something we would have to figure out (comparable to evolving zerobots).
As you can imagine, all those weights and dots can consume a lot of memory, so to keep the program fast I think not all cells should require such large amounts of memory.
Back to neural nets.
Normally this math works with real numbers (maybe floats) and rarely with short integers (maybe an int type is enough).
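If floats turn out to be too heavy, small integers can still approximate them with fixed-point math. A rough sketch (the `SCALE` value and helper names are made up, not anything from DB):

```python
SCALE = 256  # fixed-point scale factor: 1.0 is stored as 256

def to_fixed(x):
    # convert a real-valued weight into a small integer
    return int(round(x * SCALE))

def fixed_mul(a, b):
    # multiply two fixed-point numbers, rescaling back down
    return (a * b) // SCALE

w = to_fixed(0.75)   # stored as 192
x = to_fixed(2.0)    # stored as 512
y = fixed_mul(w, x)  # 192 * 512 // 256 = 384
print(y / SCALE)     # back to a real number: 1.5
```

Each weight then fits in a short integer at the cost of some rounding error, which is the trade-off hinted at above.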
As you might notice this requires some memory, so it would be better to have another cell type (so that not all cells would require large amounts of memory).
A brain cell could then contain more code and process more advanced commands.
Eye cells, movement cells, shooting cells, poison storage, shell cells, etc. could be smaller (simply having an input or output, some energy demands, and a connection (location?)).
I know that's radically different from DB, in which a single cell can do everything and potentially holds all kinds of information even if it doesn't process it.
Note that for everything that can be expressed with 1s and 0s you can use binary math (XOR, OR, AND, etc.) and so limit its memory usage.
Although one bit can only store on/off, two bits can store four states (00 01 10 11; 4 = 2^2), and three bits can store 2^3 = 2*2*2 = 8 states.
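A tiny sketch of that bit-packing idea in Python (the flag names are invented for illustration, not DB cell properties):

```python
# n bits can distinguish 2**n states
for n in (1, 2, 3):
    print(n, "bits ->", 2 ** n, "states")

# packing two 1-bit cell flags into one integer with binary math
eye_on, firing = 1, 0
state = (eye_on << 1) | firing   # bits: 0b10, i.e. the value 2

assert state & 0b10              # AND tests the eye bit
assert state ^ 0b10 == firing    # XOR clears the eye bit again
```

So two simple on/off properties cost two bits instead of two whole memory locations.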
In DB there are areas like the *.970 area (given to the child at birth, so children keep what was learned); something like that should also apply here, I think (so you get Spock-like brain transfers at birth).
The easiest way might be to have a cell type with a double address range, one that is not polluted with all kinds of commands.
Note that the math actually required for doing this, and for training the neural net, is another topic and a challenge in itself, like Numsgill's challenge for a zerobot using conditional behaviour.
But once we get a small neural network working, it can be expanded, effectively becoming smarter.
At higher energy costs of course, as our own brains consume a large share of our body's energy.
Don't think of too-complex neural networks for a start: a single bot with 3 layers and about 20 dots total; a minimum would be, I think, maybe 5 dots in the first layer, 3 in the next, and 4 in the last (12 dots).
It doesn't require something like 50 dots, because brain cells could also be connected to other brain cells.
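The 5-3-4 layout suggested above can be sketched end to end in a few lines of Python (the eye-input values and random seed are arbitrary; this is just the forward pass, no training):

```python
import math
import random

def sigmoid(x):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # one row of dots: each dot sums every input from the row
    # above times its weight line, then applies the sigmoid
    return [sigmoid(sum(i * w for i, w in zip(inputs, row)))
            for row in weights]

random.seed(1)
sizes = [5, 3, 4]  # the 5-3-4 layout: 12 dots total
# random seeding: one weight matrix between each pair of layers
nets = [[[random.uniform(-1.0, 1.0) for _ in range(sizes[k])]
         for _ in range(sizes[k + 1])]
        for k in range(len(sizes) - 1)]

signal = [0.2, 0.9, -0.4, 0.0, 1.0]  # made-up eye inputs
for w in nets:
    signal = layer(signal, w)
print(len(signal), signal)  # 4 outputs, each between 0 and 1
```

The 4 outputs could then drive movement, shooting, and so on; everything above is maybe 30 numbers of storage, which shows why 12 dots is already enough to experiment with.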
Another method could be single-neuron cells, but then we would need something like a species editor to connect each cell correctly into a neural network.
That way they consume less memory, but bot creation becomes a new topic of discussion: how to do it in DNA, or without DNA? A string description of a species, maybe?