Neural nets actually encompass a lot of similar-sounding but fundamentally different areas.
Classic neural nets have "input" nodes that feed intermediate nodes, which in turn feed output nodes. That is, there's a clear flow of information. These tend to be brittle, training-heavy constructs of limited real-world use (primarily useful in areas requiring fuzzy decision making). They're also very susceptible to overfitting. I believe these are called "feedforward" networks.
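To make the "clear flow" concrete, here's a minimal sketch of a feedforward pass in numpy. The layer sizes and random weights are my own invention for illustration; the point is just that each layer only feeds the next one:

```python
import numpy as np

rng = np.random.default_rng(0)

def feedforward(x, weights, biases):
    """One clean left-to-right pass: inputs -> hidden layer -> outputs."""
    a = x
    for W, b in zip(weights, biases):
        a = np.tanh(W @ a + b)  # each layer's output feeds only the next layer
    return a

# A tiny 3-input, 4-hidden, 2-output network with arbitrary random weights.
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
biases = [np.zeros(4), np.zeros(2)]

out = feedforward(np.array([1.0, 0.5, -0.2]), weights, biases)
```

Note there is no path by which a later node's activity can influence an earlier node; information only ever moves forward.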
What's caught my interest lately are attempts to do away with that clean delineation. Instead of signals moving cleanly from input to output, the neurons echo signals back as well: a large web with some input and output nodes embedded in it, without any clear beginning or end. (I believe these are called recurrent networks.) There was an article about this, but I don't remember where I saw it.
Regarding your idea, I'd point out the following difficulties:
1. Adding nodes does not mean your neural network is better. Too many nodes lead to more deeply ingrained overfitting, I believe, and overfitting is a serious issue with neural nets. Imagine using a quadratic function to mimic a linear one. Or worse, an n^26 polynomial where trivial terms (zero coefficients) aren't allowed. In general, neural nets have an "optimal" size for a given problem set and degrade quickly as you depart from that size. You don't always know what that optimal size is unless you already know a lot about the solution, which rather defeats the purpose.
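The polynomial analogy is easy to demonstrate directly. Here's a sketch (data and degrees chosen arbitrarily) fitting a degree-1 and a degree-7 polynomial to noisy linear data: the big model nails the training points, then falls apart just outside them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: a noisy linear function y = 2x + 1.
x_train = np.linspace(0.0, 1.0, 8)
y_train = 2.0 * x_train + 1.0 + rng.normal(scale=0.05, size=x_train.shape)

# A right-sized model and an oversized one.
p_good = np.polyfit(x_train, y_train, deg=1)
p_over = np.polyfit(x_train, y_train, deg=7)   # 8 coefficients, 8 points

# The oversized model fits the training points essentially exactly...
train_err_good = np.mean((np.polyval(p_good, x_train) - y_train) ** 2)
train_err_over = np.mean((np.polyval(p_over, x_train) - y_train) ** 2)

# ...but just outside the training range its wiggles take over.
x_test = np.linspace(1.05, 1.2, 5)
y_test = 2.0 * x_test + 1.0
test_err_good = np.mean((np.polyval(p_good, x_test) - y_test) ** 2)
test_err_over = np.mean((np.polyval(p_over, x_test) - y_test) ** 2)
```

The oversized polynomial has memorized the noise; the extra capacity hurts rather than helps, which is exactly the danger of blindly adding nodes.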
2. I believe you're making a homunculus error here.
How does the neural net learn to improve itself? That is, what exactly does the neural net change? Classically, neural nets "train" themselves by modifying their nodes' weights. But that mechanism doesn't extend to adding new nodes, or to "observing" a human modifying the network.
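To be concrete about "training = modifying weights", here's a minimal sketch (a single linear neuron, with a made-up target function and learning rate) trained by gradient descent. Notice that the only thing the learning rule is capable of changing is `w`; there's no move in this procedure that corresponds to adding a node:

```python
# One linear neuron learning y = 3x by gradient descent on squared error.
w = 0.0     # the single weight the network is allowed to change
lr = 0.1    # learning rate, chosen arbitrarily
data = [(1.0, 3.0), (2.0, 6.0), (-1.0, -3.0)]

for epoch in range(50):
    for x, y in data:
        pred = w * x
        grad = 2.0 * (pred - y) * x   # d(error)/dw for error = (pred - y)**2
        w -= lr * grad                # training changes w, and only w
```

After training, `w` has converged to 3.0. Everything the classic picture calls "learning" lives inside that one update line, which is why structural changes like new nodes need some other, separately specified mechanism.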
My old CS professor's website has an excellent introduction to the theory of neural networks.