Hey everyone,
It's been a while since I posted around here, but at the end of the summer a bunch of crazy things happened in my life and I didn't have time to indulge my interest in programs like DarwinBots. Lately I've been working as a Java software developer though, which is an awesome transition from my previous work as a writer. It's nice to be back in the field even though I have an English degree.
Anyway, I've been doing a lot of research and reading academic papers on reinforcement learning techniques, and I was thinking about something Numsgil said: that artificial neural networks are more accurately described as weighted cascading gate mechanisms (or something along those lines). I completely agree, and in fact I only began to understand them thoroughly when I stopped thinking of them as neural networks and started thinking of them that way instead.
But I was just brushing up on some math, and I realized it might be really cool (though not necessarily computationally efficient) to model an artificial neural network in a different way, based on something like real physics. (Not biochemistry or anything, just simple physics.)
Imagine a neural network where each node is modeled as a point in space (let's go with 2-dimensional space for now) from which a vector extends downward, toward the next layer of nodes. Data is transferred to the next layer by calculating the proximity of each of the previous layer's vector endpoints to the nodes of the next layer. (There are plenty of alternative methods, but this one makes the most sense to me.)
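To make the forward pass concrete, here's a rough Java sketch of what I mean. Everything here (the `GeoNode` class, the `1/(1 + distance)` falloff, the layer spacing) is just my own invention for illustration, not anything from a real library:

```java
import java.util.Arrays;

// Sketch of a "geometric" layer: each node is a point on its layer's axis
// with a vector (angle, magnitude) pointing down toward the next layer.
// Activation transfer is based on how close each vector's endpoint lands
// to each node of the next layer.
public class GeoLayerDemo {
    static class GeoNode {
        double x;          // position along the layer's horizontal axis
        double angle;      // direction of the outgoing vector, in radians
        double magnitude;  // length of the outgoing vector

        GeoNode(double x, double angle, double magnitude) {
            this.x = x;
            this.angle = angle;
            this.magnitude = magnitude;
        }

        // Endpoint of this node's vector, assuming this layer sits at y = 0
        // and the next layer at y = -1.
        double endpointX() { return x + Math.cos(angle) * magnitude; }
        double endpointY() { return -Math.sin(angle) * magnitude; }
    }

    // Each target node sums contributions from every source endpoint,
    // weighted by proximity: closer endpoints contribute more. I'm using a
    // simple 1/(1 + distance) falloff here; many other choices would work.
    static double[] propagate(GeoNode[] from, double[] activations, GeoNode[] to) {
        double[] out = new double[to.length];
        for (int j = 0; j < to.length; j++) {
            double sum = 0;
            for (int i = 0; i < from.length; i++) {
                double dx = from[i].endpointX() - to[j].x;
                double dy = from[i].endpointY() - (-1.0); // next layer at y = -1
                double dist = Math.hypot(dx, dy);
                sum += activations[i] / (1.0 + dist);
            }
            out[j] = Math.tanh(sum); // squash, like a traditional activation
        }
        return out;
    }

    public static void main(String[] args) {
        // Two nodes per layer; both vectors point straight down (angle = pi/2),
        // so each endpoint lands directly on the node below it.
        GeoNode[] layer1 = { new GeoNode(0.0, Math.PI / 2, 1.0),
                             new GeoNode(2.0, Math.PI / 2, 1.0) };
        GeoNode[] layer2 = { new GeoNode(0.0, Math.PI / 2, 1.0),
                             new GeoNode(2.0, Math.PI / 2, 1.0) };
        double[] out = propagate(layer1, new double[]{1.0, 0.5}, layer2);
        System.out.println(Arrays.toString(out));
    }
}
```

Notice there are no explicit weights anywhere; the "weight" between two nodes is an emergent consequence of where the geometry puts the endpoint.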
Training this kind of neural network could involve modifying the magnitude of these vectors so their endpoints land closer to (or farther from) the next layer of nodes, which would be analogous to modifying the weights of a traditional neural network. But you could also modify the angle of a vector, or even slide nodes along their layer's axis so they sit closer together or farther apart. You could also give the nodes other kinds of activity, like pulsing with greater force, thereby affecting the next layer of nodes more strongly and from a greater distance. This would allow one node to trigger some kind of explosive activity in the network during specific event sequences while acting normally the rest of the time.
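Since there's no obvious gradient here, the simplest training scheme I can imagine is hill-climbing: randomly nudge one of a node's geometric parameters and keep the change only if the network's error drops. Another hypothetical sketch (the `Params` class and the toy error function are made up for the example):

```java
import java.util.Random;
import java.util.function.ToDoubleFunction;

// Sketch of "training" by randomly mutating a node's geometric parameters
// (magnitude, angle, position) and keeping a mutation only when it lowers
// the error. This is greedy hill-climbing, not backpropagation.
public class GeoTrainer {
    static final Random RNG = new Random(42); // seeded for reproducibility

    // The mutable geometric parameters of one node.
    static class Params {
        double x, angle, magnitude;
        Params(double x, double angle, double magnitude) {
            this.x = x; this.angle = angle; this.magnitude = magnitude;
        }
    }

    // Nudge one randomly chosen parameter by up to +/- step/2.
    static void mutate(Params p, double step) {
        switch (RNG.nextInt(3)) {
            case 0:  p.magnitude += (RNG.nextDouble() - 0.5) * step; break; // lengthen/shorten the vector
            case 1:  p.angle     += (RNG.nextDouble() - 0.5) * step; break; // tilt the vector
            default: p.x         += (RNG.nextDouble() - 0.5) * step; break; // slide the node along its axis
        }
    }

    // `error` would normally evaluate the whole network; here it can be
    // any function of the parameters. Returns the best error found.
    static double train(Params p, ToDoubleFunction<Params> error, int iters) {
        double best = error.applyAsDouble(p);
        for (int i = 0; i < iters; i++) {
            Params trial = new Params(p.x, p.angle, p.magnitude);
            mutate(trial, 0.1);
            double e = error.applyAsDouble(trial);
            if (e < best) { // keep only improvements
                best = e;
                p.x = trial.x; p.angle = trial.angle; p.magnitude = trial.magnitude;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Toy objective: move the vector's endpoint so its x-coordinate is near 1.0.
        Params p = new Params(0.0, Math.PI / 2, 1.0);
        double err = train(p, q -> Math.abs(q.x + Math.cos(q.angle) * q.magnitude - 1.0), 2000);
        System.out.println("final error: " + err);
    }
}
```

The nice part is that all three kinds of modification (magnitude, angle, position) go through the exact same accept/reject loop, so adding new "activity choices" like the pulse idea would just mean adding another case to `mutate`.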
What you would have is an actual cascading network of nodes, where the data cascades through the network in a much more complex and multi-faceted (but still ordered) way than it does in a traditional NN that just uses weights and thresholds to decide how data passes through a node. My attraction to this idea stems from two things:
1.) It would be easy (and extremely captivating) to graphically illustrate this kind of neural network and watch it in action.
2.) Because each node interacts with other nodes in a mathematically complex and dynamic way, it would allow smaller numbers of neurons to approximate much more complex functions.
Anyway, I came here to post this because Numsgil usually has interesting thoughts on ideas like this.