
Hello, and opinions on robotics


santacide:
Greetings!

Although this question is not strictly related to DB, I was hoping I could have your insight... I have been a BEAM robotics enthusiast for a while, but have started to shift away from analog circuits toward microprocessor designs in order to achieve greater complexity. Although using a processor means I could just program the behavior in directly, that somewhat defeats the purpose of making the robot biomorphic.

Basically the only requirements are that the design be biologically inspired in some way and fit on an Arduino-class microcontroller (Atmel, PICAXE, Parallax Propeller), though the ability to learn and to give time-dependent output would be nice too. The sensors/inputs will not be very complex: something along the lines of 2 microphones, 2 light sensors, and 3 touch sensors, while the outputs would be movement direction, pan/tilt for the light/sound sensors, and a speaker. By time-dependent output I'm referring to output that changes with time as well as with sensor input, such as leg motor movement (where the leg must move up for, say, half a second, move forward, move down, move back, and repeat).
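To illustrate what I mean by that, here's a rough Python sketch of a timed leg cycle (the phase names, durations, and motor commands are just placeholders; the real version would run on the microcontroller and drive actual servos):

Code:

import time

# One leg cycle as a list of phases: (name, duration in seconds, command).
# The commands are placeholders; on real hardware they would map to servo
# angles or H-bridge outputs.
GAIT_PHASES = [
    ("lift",   0.5, {"vertical": +1, "horizontal":  0}),
    ("swing",  0.5, {"vertical":  0, "horizontal": +1}),
    ("lower",  0.5, {"vertical": -1, "horizontal":  0}),
    ("stance", 0.5, {"vertical":  0, "horizontal": -1}),
]

def run_gait(set_motor, cycles=3):
    """Step one leg through its phases on a fixed timer.
    set_motor is whatever function actually drives the hardware."""
    for name, duration, command in cycles * GAIT_PHASES:
        set_motor(command)       # apply this phase's motor command
        time.sleep(duration)     # hold it for the phase's duration

# Stand-in "motor" that just prints the commands:
run_gait(print, cycles=1)

Of course, hard-coding the cycle like that is exactly the kind of direct programming I'd like to avoid; the interesting part is getting that sort of pattern to emerge or be learned.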

I've been considering a lot of options, such as cellular automata, neural networks, and genetic algorithms, but I was hoping to get your viewpoint as well. Thoughts?

Botsareus:
Well, I am planning to use a mod of DB for exactly the situation described, but I worked out that it must be ported to 3D first, with long-range vision and sound reception added. At the core, each bot behaves as a neuron. I can't give more than that at this time. I am still working out whether I will need to pre-program neurons for each area of the brain or just run zerobot algorithms; it will all depend on the efficiency of the algorithm.

Numsgil:
Robotics is a bit outside of my wheelhouse, but this is how I'd approach it:

1.  Decide what sort of behavior you want.  At the simplest, it could be how fast it can move forward.  The more (mathematically) precisely you can state the objective the robot is trying to achieve, the easier and faster it's going to be to get there.
2.  Decide what sort of inputs you have to your learning algorithm.  You have the inputs from the sensors, of course, but you could also store inputs from the recent past and feed those in.  At the very least, you almost certainly have some sense of time based on the update frequency of the various components, or at least on the clock rate of the microcontroller.  You can also do things like multiply one sensor's input by another's to get a nonlinear third input.  The "magic" in neural networks is basically this sort of nonlinear behavior.
3.  Figure out a training regimen.  Are you simulating the robot in a virtual environment first and then uploading the resulting model to the robot?  Or do you want the robot to figure it out on its own?  (That's cooler, but will most likely take a small eternity, and is certainly more difficult.)
4.  From the answers above, you can figure out an appropriate machine learning method.  Basically you want to find a "model" that will produce the right outputs given the right inputs, which is a machine learning problem.  Essentially you're producing something like a big (probably sparse) matrix that maps inputs (including the nonlinear inputs I mentioned) to outputs; there's a small sketch of that idea just after this list.
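To make point 4 a bit more concrete, here's roughly what that input-to-output mapping looks like in Python (the sensor names, the particular nonlinear terms, and the sizes are all made up for the example; the weight matrix W is the thing the learning algorithm would be searching for):

Code:

import numpy as np

def feature_vector(light_l, light_r, touch, t):
    """Raw sensor readings plus a few hand-picked nonlinear terms."""
    return np.array([
        1.0,                 # bias term
        light_l, light_r, touch, t,
        light_l * light_r,   # nonlinear cross term
        light_l - light_r,   # left/right difference, useful for steering
        np.sin(t),           # a periodic "clock" input
    ])

# One row of weights per actuator (say, left motor and right motor).
n_outputs, n_features = 2, 8
W = np.zeros((n_outputs, n_features))   # probably sparse in practice

def controller(light_l, light_r, touch, t):
    return W @ feature_vector(light_l, light_r, touch, t)

Everything a neural net or genetic algorithm would be doing here is, one way or another, picking good numbers for W (and possibly better features than the ones I guessed).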

A genetic algorithm makes a lot of sense if you can "parameterize" your model.  For instance, it's quite good at finding the right values for leg length, contraction force, contraction frequency, etc. if you were simulating a frog learning to jump.  I'm not a huge fan of neural networks, but they can be useful if you need to capture some nonlinear behavior (like doing something in relation to the product of two other things).  (Most of what I don't like about them is that they obfuscate the underlying math behind a facade of being a "brain", when in point of fact biological brains are far more complex and sophisticated and are doing far more interesting math.)
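As a rough illustration of what I mean by "parameterize", here's a toy genetic algorithm in Python for the frog example (the fitness function is a stand-in; in practice it would call your physics simulation or time a real jump):

Code:

import random

# Genome: [leg_length, contraction_force, contraction_frequency]
BOUNDS = [(0.05, 0.30), (1.0, 20.0), (0.5, 5.0)]

def random_genome():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(genome, rate=0.1):
    # Gaussian mutation, clamped to the allowed range for each parameter.
    return [min(hi, max(lo, g + random.gauss(0, rate * (hi - lo))))
            for g, (lo, hi) in zip(genome, BOUNDS)]

def fitness(genome):
    # Placeholder for "how far did the simulated frog jump
    # with these parameters".
    leg, force, freq = genome
    return leg * force / (1.0 + abs(freq - 2.0))

def evolve(pop_size=50, generations=100):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]                 # keep the best half
        children = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + children
    return max(population, key=fitness)

The only parts that are specific to your robot are the genome layout, the bounds, and the fitness function; the rest is boilerplate.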

...

My main piece of advice would be to abandon biologically-inspired learning algorithms and see what else is available to you.  At the end of the day, neural nets are just nonlinear best-fit curves in high-dimensional space, genetic algorithms are just a sort of stochastic gradient ascent for an optimization problem, etc.  The underlying math is all that really matters, so I would explore things on that front.  Of course, I'm pretty math-heavy as a person; if you don't feel as comfortable with linear algebra, you might not want to explore too far off the beaten path.  Just figure out what your inputs and outputs look like and how you can measure success and failure, then look around the machine learning literature and see if you can find an algorithm that makes sense.

But there's nothing wrong with starting with a genetic algorithm or a neural net if that inspires you :)

Botsareus:
Or a neural net where each neuron uses a genetic algorithm. :P

edit: Sorry, I tend to be much grander with my ideas than Numsgil.

santacide:
Hey Numsgil,

Thanks for the suggestions, those are actually great ideas. I've done some more planning/research and made the design a bit more concrete...

1: Reward/Pain is a result of the amount of battery charge remaining.

2: Inputs are the battery meter, 2 light sensors, and 2 tactile sensors. The charging station will be located under a light source, so that when the battery meter shows a low charge the robot can seek out the light (there's a hand-wired sketch of that behaviour just after this list).

3: I'm planning on having the robot learn in the "real world", not in a simulation, though like you said that would certainly be faster.

4: I'm still not sure about the implementation, though I'm leaning towards a neural network just for the biomimicry. I'm thinking I'll implement the network in VHDL on a small CPLD/FPGA, with the inputs and outputs being pre- and post-processed by a microcontroller.
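Just to make point 2 concrete, here's the kind of hand-wired behaviour I'd want the network to end up learning on its own (a Braitenberg-style crossed wiring in Python, with made-up thresholds and scaling):

Code:

def seek_charger(battery, light_left, light_right, low_battery=0.3):
    """Hand-wired stand-in for the behaviour the network should learn:
    when the battery is low, steer toward the brighter side (the charging
    station sits under a light source); otherwise just cruise."""
    if battery > low_battery:
        return 0.5, 0.5                       # plenty of charge: wander
    # Crossed excitatory wiring (Braitenberg vehicle 2b): more light on
    # the right speeds up the left wheel, turning the robot to the right,
    # i.e. toward the light.
    left_motor = 0.2 + 0.8 * light_right
    right_motor = 0.2 + 0.8 * light_left
    return left_motor, right_motor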

Note that many things about this will be "intelligently designed", such as the body, with the neural network being used only for behaviour. Similar to Darwinbots.

I've also done some research on the C. elegans neural connectome and found two things relevant to this:

1: The C. elegans neural network has a fixed number of neurons and a fixed number of connections. However, it can learn by strengthening/weakening the pre-existing synapses.

2: There is a ridiculous amount of recurrence. Sensory neurons affect both motor neurons and interneurons, which affect each other and in many cases send signals back to the sensory neurons. This is probably how the worm can learn using only a fixed set of neurons and connections.

So, I'm thinking of hard-coding a neural network in an FPGA/CPLD and having the robot learn by changing the neuronal weights. At this point I'm unsure of how to automate the weight changes, though.
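The closest thing I've found so far is a reward-modulated Hebbian rule: strengthen a synapse when its two neurons were recently co-active and the battery-based reward improved, weaken it otherwise. Something like this toy Python sketch (the network size, sparsity, learning rate, and reward signal are all placeholders, and I'm not sure it's the right approach):

Code:

import numpy as np

N_NEURONS = 16          # fixed number of neurons, like the worm
rng = np.random.default_rng(0)

# Fixed sparse connectivity: which synapses exist never changes,
# only their strengths do.
mask = (rng.random((N_NEURONS, N_NEURONS)) < 0.2).astype(float)
W = mask * rng.normal(0.0, 0.1, (N_NEURONS, N_NEURONS))

state = np.zeros(N_NEURONS)

def step(sensor_inputs, reward_delta, lr=0.01):
    """One recurrent update plus a reward-modulated Hebbian weight change.
    sensor_inputs drives the first few neurons; reward_delta is the change
    in the battery-based reward since the last step."""
    global state, W
    drive = W @ state
    drive[:len(sensor_inputs)] += sensor_inputs   # inject the sensors
    new_state = np.tanh(drive)

    # Hebbian term: co-activity of pre- and post-synaptic neurons,
    # scaled by whether things got better or worse, and masked so that
    # only the synapses that actually exist are adjusted.
    W += lr * reward_delta * np.outer(new_state, state) * mask

    state = new_state
    return state[-2:]        # read the last two neurons as motor outputs

The recurrence comes for free, since every neuron's new state depends on the whole previous state through W.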

I guess my main question at this point is, how would you go about this?

Thanks again for the help.
