Author Topic: Question about the Feasibility of a Neural Network Set-up  (Read 3697 times)

Offline Elite

  • Bot Overlord
  • ****
  • Posts: 532
    • View Profile
Question about the Feasibility of a Neural Network Set-up
« on: March 01, 2007, 11:14:27 AM »
Are any of the programmers here familiar with the workings of neural networks?

If so, have you any idea of how much difficulty would be involved in setting up this system:

Some of the neural network's outputs cause a change in the actual set-up of the network, so it can effectively re-write its own connection architecture.

The network observes a human programmer making improvements to the network, and learns to make basic improvements to itself. A node is periodically added to the network, and the re-optimisation process begins again (which would of course aid subsequent attempts), with the programmer giving the network both an initial push and subsequent reinforcement.

It seems to me neural nets lend themselves to this kind of recursive self-improvement because of their simplicity, modularity, adaptivity and nonlinearity.
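As a rough illustration of what "outputs that rewire the network" might look like, here is one possible sketch in Python. None of the names, thresholds, or mechanisms below come from the post; it is just one way the idea could be made concrete, with one output channel repurposed as a "grow" signal:

```python
# Illustrative only: a net whose own activity can trigger a structural
# change (adding a hidden node). All names and the 1.0 threshold are
# invented for this sketch.
import random

class GrowingNet:
    def __init__(self, n_in, n_hidden, seed=0):
        self.rng = random.Random(seed)
        self.n_in = n_in
        # hidden[j] is the list of input weights for hidden node j
        self.hidden = [[self.rng.uniform(-1, 1) for _ in range(n_in)]
                       for _ in range(n_hidden)]
        self.out_w = [self.rng.uniform(-1, 1) for _ in range(n_hidden)]
        self.grow_w = [self.rng.uniform(-1, 1) for _ in range(n_hidden)]

    def step(self, inputs):
        # Ordinary forward pass (ReLU hidden layer)
        acts = [max(0.0, sum(w * x for w, x in zip(ws, inputs)))
                for ws in self.hidden]
        output = sum(w * a for w, a in zip(self.out_w, acts))
        # A second output acts as a structural signal: if it fires
        # strongly enough, the net rewires itself
        grow_signal = sum(w * a for w, a in zip(self.grow_w, acts))
        if grow_signal > 1.0:
            self.add_node()
        return output

    def add_node(self):
        # Structural change: one new hidden node with random weights
        self.hidden.append([self.rng.uniform(-1, 1) for _ in range(self.n_in)])
        self.out_w.append(self.rng.uniform(-1, 1))
        self.grow_w.append(self.rng.uniform(-1, 1))
```

The hard part, of course, is not adding the node but deciding *when* adding one helps — which is exactly the question the thread turns on.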

Firstly, how easy or difficult would this kind of self-improving neural net set-up be to program?
Secondly, do you think a network could learn to improve itself in such a way?
« Last Edit: March 01, 2007, 11:20:27 AM by Elite »

Offline Numsgil

  • Administrator
  • Bot God
  • *****
  • Posts: 7742
    • View Profile
Question about the Feasibility of a Neural Network Set-up
« Reply #1 on: March 01, 2007, 11:29:14 PM »
Neural nets actually cover a lot of similar but fundamentally different areas.

Classic neural nets have "input" nodes that feed intermediary nodes, which in turn feed output nodes.  That is, there's a clear one-way flow of information.  These tend to be brittle, training-heavy constructs of limited real-world use (primarily useful in areas requiring fuzzy decision making).  They're very susceptible to over-learning, too.  I believe these are called "feedforward".
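That one-way flow can be sketched in a few lines of Python (the class, layer sizes, and sigmoid activation below are my own illustration, not from any particular library):

```python
# A minimal feedforward net: input -> hidden -> output, nothing flows back.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class FeedforwardNet:
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = random.Random(seed)
        # w_ih[i][j]: weight from input node i to hidden node j
        self.w_ih = [[rng.uniform(-1, 1) for _ in range(n_hidden)]
                     for _ in range(n_in)]
        # w_ho[j][k]: weight from hidden node j to output node k
        self.w_ho = [[rng.uniform(-1, 1) for _ in range(n_out)]
                     for _ in range(n_hidden)]

    def forward(self, inputs):
        # Information flows strictly one way through the layers
        hidden = [sigmoid(sum(x * w[j] for x, w in zip(inputs, self.w_ih)))
                  for j in range(len(self.w_ih[0]))]
        return [sigmoid(sum(h * w[k] for h, w in zip(hidden, self.w_ho)))
                for k in range(len(self.w_ho[0]))]
```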

What's caught my interest lately are attempts to de-layer neural nets.  Instead of signals moving cleanly from input to output, have the neurons echo signals back, too: a large web with some input and output nodes ingrained in it, without any clear beginning or end.  There was an article, but I don't remember where I saw it.

Towards your idea, I would point out the following difficulties:

1.  Adding nodes does not mean your neural network is better.  Too many nodes leads to more ingrained over-learning, I believe, and over-learning is a serious issue with neural nets.  Imagine using a quadratic function to mimic a linear one.  Or worse, a 26th-degree polynomial where trivial terms (zero coefficients) aren't allowed.  In general, neural nets have an "optimal" size for a given problem set, and degrade quickly as you depart from that size.  You don't always know what that optimal size is unless you already know a lot about the solution, which kind of defeats the purpose.
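The polynomial analogy can be made concrete with a toy curve-fitting experiment (the data and degrees here are invented for illustration; the degree-9 polynomial plays the role of a net with too many nodes). Both models see the same ten noisy samples of a straight line; the oversized one memorises the noise and does worse between the samples:

```python
# Toy over-fitting demo: right-sized vs oversized model on noisy linear data.
import numpy as np

x_train = np.linspace(0.0, 1.0, 10)
noise = 0.1 * np.array([1, -1] * 5)        # deterministic "measurement noise"
y_train = 2 * x_train + 1 + noise

lin = np.polyfit(x_train, y_train, 1)      # right-sized: a line
big = np.polyfit(x_train, y_train, 9)      # oversized: interpolates the noise

x_test = (x_train[:-1] + x_train[1:]) / 2  # held-out points between samples
y_true = 2 * x_test + 1
err_lin = float(np.mean((np.polyval(lin, x_test) - y_true) ** 2))
err_big = float(np.mean((np.polyval(big, x_test) - y_true) ** 2))
# err_big comes out far larger than err_lin: the extra capacity hurts
```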

2.  I believe you're making a homunculus error here.  How does the neural net learn to improve itself?  That is, what does the neural net change?  Classically, neural nets "train" themselves by modifying their nodes' weights.  But that doesn't make sense as a mechanism for adding new nodes, or for "observing" a human modifying the network.

My old CS professor's website has an excellent introduction to the theory of neural networks.

Offline Elite

  • Bot Overlord
  • ****
  • Posts: 532
    • View Profile
Question about the Feasibility of a Neural Network Set-up
« Reply #2 on: March 02, 2007, 02:44:28 AM »
Ah, OK, thanks Num