Look at the "possible improvements" section of the wiki article for the Wagner-Fischer algorithm (which computes the Levenshtein edit distance). Wiki article: http://en.wikipedia.org/wiki/Wagner%E2%80%93Fischer_algorithm
It says that we could potentially make substantial speed improvements, depending on what we need. For example, if we have an upper bound on the distance, we can substantially reduce the computation time. Using lazy evaluation also apparently gives a substantial speed improvement (details here, although I don't completely understand them: http://www.csse.monash.edu.au/~lloyd/tildeStrings/Alignment/92.IPL.html).
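For reference, here is a minimal sketch of the basic (unimproved) Wagner-Fischer algorithm the wiki describes, which fills the full O(m*n) matrix:

```python
def levenshtein(a, b):
    """Full-matrix Wagner-Fischer: d[i][j] is the edit distance
    between a[:i] and b[:j]. O(m*n) time and space."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # deleting i characters
    for j in range(n + 1):
        d[0][j] = j  # inserting j characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution (or match)
    return d[m][n]
```

The improvements below all cut down how much of this matrix actually gets filled in.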
On a somewhat unrelated note, if the DNA comes from robots of the same species (both mutated forms of the same original DNA), then we can use the robots' mutation histories to compare how related they are. For instance, if robot A has X mutations and robot B has Y mutations, but the first Z mutations are the same for both, then the edit distance is at most X+Y-2*Z. We would have to make some small adjustments so that "run" mutations count as multiple edits.
If the robots are closely related, then we can use that upper bound to reduce the time needed to calculate edit distance.
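A sketch of how that bound could be computed; the representation of a mutation history (and the `cost` function) is hypothetical, since it depends on how the simulation records mutations:

```python
def edit_distance_bound(history_a, history_b, cost=lambda mut: 1):
    """Upper bound on the edit distance between two genomes, derived
    from their mutation histories. `cost(mut)` is how many
    single-character edits a mutation represents: 1 for a point
    mutation, the run length for a "run" mutation (hypothetical
    representation; adjust to the actual mutation record format)."""
    z = 0  # length of the shared history prefix
    for ma, mb in zip(history_a, history_b):
        if ma != mb:
            break
        z += 1
    # Each mutation after the shared prefix contributes at most its
    # edit cost, so the distance is at most cost(A) + cost(B) - 2*cost(shared).
    return (sum(cost(m) for m in history_a[z:]) +
            sum(cost(m) for m in history_b[z:]))
```

With unit costs this is exactly the X+Y-2*Z formula above.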
Assuming I read the wiki correctly, here is an example. Say robot A has 600 bp and 34 mutations (with run mutations counting as multiple mutations), and robot B has 590 bp and 36 mutations, with the first 15 mutations shared. Normally, the matrix would contain 600*590=354000 entries. Since we have the upper bound of (34+36-2*15)=40, we can use the modified algorithm to evaluate only a diagonal band of the matrix: roughly 600*(2*40+1)=48600 entries. That is nearly an order-of-magnitude speedup (about 7x).
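The band-limited version from the wiki's "possible improvements" could be sketched like this: given an upper bound k, only cells within k of the main diagonal are filled (everything outside the band is pinned at k+1, which is a safe stand-in because out-of-band cells have true value at least |i-j| > k), so about m*(2k+1) entries are computed instead of m*n:

```python
def levenshtein_banded(a, b, k):
    """Edit distance if it is <= k, else None.
    Fills only a band of width 2k+1 around the diagonal."""
    m, n = len(a), len(b)
    if abs(m - n) > k:
        return None  # the distance is at least |m - n|, so it exceeds k
    INF = k + 1      # stand-in value for cells outside the band
    prev = [j if j <= k else INF for j in range(n + 1)]
    for i in range(1, m + 1):
        cur = [INF] * (n + 1)
        cur[0] = i if i <= k else INF
        # Only columns within k of the diagonal need real values.
        for j in range(max(1, i - k), min(n, i + k) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution (or match)
        prev = cur
    return prev[n] if prev[n] <= k else None
```

If the mutation-history bound from above is loose (the robots are distantly related), this degrades gracefully: it just reports that the distance exceeds k, and you could fall back to the full matrix.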