Show Posts


Messages - Griz

Pages: [1] 2 3 ... 40
Bugs and fixes / DarwinBot crash (ver 2.44)
« on: December 13, 2008, 07:15:49 PM »
Quote from: Numsgil
I found the last changelog.  Was hiding in announcements.  Anyway, they're all pinned.

I have no idea what that means ...
or where to find them.
can you give me a clue?

Bugs and fixes / DarwinBot crash (ver 2.44)
« on: December 12, 2008, 05:18:28 PM »
ok ...
have searched everywhere I can think of.
if it exists ...
I would think it would be at the FTP just like ver h is:

doesn't appear to be.

so it goes.

Bugs and fixes / DarwinBot crash (ver 2.44)
« on: December 12, 2008, 04:51:30 PM »
Quote from: Numsgil
They're versions.  If you search the bugs and fixes forum you can find Eric's changelog where he kept track of what he changed between versions.

good luck!

I found a message thread on
but so far not for L.
seems M had some problems and L ended up being the last.
as I recall, eric often included an upload of the new changes ...
ie ... a,b,c .... L, M ... compiled ...
but not the source code in his update files.

well ...
I'll keep looking ...
if anyone discovers it ...
please let me know.
will do the same.

Bugs and fixes / DarwinBot crash (ver 2.44)
« on: December 11, 2008, 10:47:51 AM »
Quote from: Endy
Darwin2.43.1L works good, I think the species forking stuff is the main issue with the newer versions.
yes, I have that ...
what I don't have is the source code for 2.43.1L ...
and I don't see it on the wiki or download site.
the last source I see there is Darwinsource2.43.1h ...
unless eric shared it in one of his update threads ...
which I haven't yet found.

Bugs and fixes / DarwinBot crash (ver 2.44)
« on: December 10, 2008, 10:47:28 AM »
Quote from: Numsgil
Hey, yeah, long time no see.

Did you check out the pic I'm using on the wiki

off topic???
[hey endy]
somewhere in this thread ...
people were looking for the sourcecode for 2.44.
I just added the link after the last message.

yes ...
I did see the new pic on the wiki.
I have moved and changed my address ...
so maybe that's why I haven't been
receiving my royalty checks.

well ...
I'm just trying to catch up ...
downloaded the latest ... 2.44 and source code ...
then read this thread and find there were/are problems/bugs.
what is the last stable version that people are now using?

back to reading.

Bugs and fixes / DarwinBot crash (ver 2.44)
« on: December 09, 2008, 10:47:52 AM »
Quote from: Numsgil
The older version you have might not do viruses like you expect.  There's been some work with how viruses work relatively recently.

isn't the source code for 2.44 here:

Bugs and fixes / League Problems
« on: January 03, 2007, 10:17:46 PM »
Quote from: Numsgil
This is the difference in our opinions.  You view this as a problem, I do not.  It is mostly impossible to rank bots in a linear fashion anymore, because the program is getting to a rock-paper-scissors strategy.  It's okay if stronger bots get stuck in the bottom of the ladder.  Them's the breaks.
This is a bias, not an error.  Run the same initial ordering of bots 10 times.  If there's any deviation in the placings, that's error.  There should be very little error.  If you run 10 different initial orderings and get 10 different results, that's bias.

Bias is inherent in all ladders.  But that's okay, because we have a fair way of ordering the initial robots.  It's okay if a strong bot is stuck at position #25.  Bots at the top of the list should not only be strong, but capable of defeating some of the tricky bots (I believe there was some Umbra bot that stops a lot of bots from proceeding in the leagues).

I would use either bots' ages or present ranking in the league to set up the initial ranking.  Ages would be best, since most older bots rank low in the league.


If you want a fair way of running the leagues, what about something like this:
take some bots, randomly order them into an initial order.  Run the league.  Reverse the new order of the bots (so 1st place becomes last place, etc.) and rerun the league.  Keep doing this until the rankings stop fluctuating every time you run the leagues.  I'm guessing they'll never stop totally fluctuating, but you'll definitely have the case that bots at the top of the league are strong, which is what you want.
There's no such thing as a statistical draw when you can run an arbitrary number of rounds.  Supposing for a moment that you managed to find 2 bots that are identically matched (exactly 50/50), there isn't a test for that.  So maybe we should add one.  But I warn you, the number of matches you need to run to determine a true statistical draw is probably in the thousands.

Try this simple experiment.  Hack into the league code, and set up a league match with a "fair" coin (assign a random winner based on a 50/50 probability).  Run the league.    It should be an enlightening experience either way.

I imagine a league winner will eventually be declared, which, come to think of it, isn't good.  We should add a catch for when the results are indicative of a true statistical draw.  But again, we're talking possibly thousands of rounds.

Just declaring a winner after 200 rounds based on who has the most wins isn't proper.  Imagine flipping a coin 200 times.  It's not going to end up 100/100.  Would you declare the coin unfair if it was 130/70?  That's where stats comes in.  Stats assure us that we aren't arbitrarily picking winners.

you're talking theory ...
and I'm talking practical application.
but you already know it all  .......... as always so ...
fuck it.
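Numsgil's "fair coin" experiment above can be sketched without touching the league code. A minimal stand-in, assuming the stopping rule described elsewhere in this thread (keep playing rounds until one side holds at least n/2 + sqrt(n) of the n rounds played); the function and its names are hypothetical, not the actual league implementation:

```python
import math
import random

def coin_league_match(max_rounds=100_000, rng=None):
    """Play a 'match' between two identical bots (a fair coin), declaring
    a winner once some bot has at least n/2 + sqrt(n) wins after n rounds.
    Returns (winner, rounds); winner is None if no decision is reached
    within max_rounds."""
    rng = rng or random.Random()
    wins = [0, 0]
    for n in range(1, max_rounds + 1):
        wins[rng.randrange(2)] += 1          # fair 50/50 'round'
        threshold = n / 2 + math.sqrt(n)
        for bot in (0, 1):
            if wins[bot] >= threshold:
                return bot, n
    return None, max_rounds

# Even with exactly 50/50 bots, a winner is often declared eventually --
# a random-walk lead crosses 2*sqrt(n) sooner or later -- though it can
# take a very long time.
rng = random.Random(1)
results = [coin_league_match(rng=rng) for _ in range(20)]
decided = [r for w, r in results if w is not None]
if decided:
    print(f"{len(decided)}/20 matches declared a winner; "
          f"rounds ranged {min(decided)}..{max(decided)}")
```

The earliest possible decision is a 4-0 sweep (4 >= 4/2 + sqrt(4)), which is exactly the "enlightening" point: the rule will happily crown one of two identical bots.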

Bugs and fixes / League Problems
« on: January 03, 2007, 10:14:55 PM »
Quote from: Numsgil
n(n+1) / 2 approaches n^2 as n gets very large (actually, it approaches 1/2 n^2, but big O ignores those coefficients), which is the point I was making.  You're going to be running a lot of rounds, no matter how you slice it.  Suppose you run 1000 bots in a round robin tournament.  That's going to be something on the order of half a million matches.  That's a lot of matches.
we aren't talking 1000 bots ... but 30 ... 435 matches.

It's okay that initial order matters in the ranking, because we have a prechosen method for the preranking: age.  Initial ranking is always going to matter, there's no getting around that in a ladder.  But using seniority for the initial ranking makes the most sense, for reasons Jez outlined above.  In your own personal ladder, feel free to choose whatever initial ordering you like.  Initial ordering does matter, but so long as your choice is arbitrary most of the league is going to be ordered correctly.
I'm sorry Nums ... it is not.
you are missing what I am talking about ...
as you already think you know what I'm saying, and you don't ...
so there's no room left there for you to take a look at what
I'm pointing at.
you're not only not on the same page ...
but not even in the same book.
ok. I'm tired of beating my head against the wall.
forget about it then.

The idea isn't that the ladder is a perfect representation of the strength of the bots.  The ladder is simply a quick way to relatively sort them.  That's why they're used in sports, people aren't patient.

I'll post in suggestions forum with my stats findings.

Suggestions / League stats tests
« on: January 03, 2007, 10:03:32 PM »
Quote from: EricL
I'm going to stay out of the statistics debate, having built few combat bots myself, other than to say that I personally have no issues with the fact that a bot may have to employ multiple strategies to defeat multiple bots ahead of it on the ladder - strategies which the higher ranked bots perhaps never required given their earlier genesis.

One thing I have noticed, however, that can greatly impact the results of a contest is inequities in the random layout of veggies and contestants.  Since there are so few starting bots, their starting positions relative to each other and to the veggies in the sim can have a huge impact on their ability to utilize veggy energy to gain nrg or numbers and thus how well they perform in the contest.  One bot may actually be demonstrably better than the other, but the margin may be slim enough that a bad start will inevitably cost it the round.  It is not statistically improbable to get 5 bad starts in a row, which can obviously lead to bogus rankings.
yes. I have noticed this as well ... part of my concern about actually having
better control over veggie re-population, esp max pop.
some bots go into sort of a holding pattern and don't move about much ...
and if the repopulation/reproduction of veggies doesn't happen to occur
near them, they eventually starve.
I forget which bots ... but whether or not a veggie appeared near them or not ...
was the deciding factor in whether they died out or went on to win.
I might suggest we hard code certain restrictions such as starting contestants in very specific locations on opposite sides of the field and exactly placing the initial veggies equidistant from the competitors, thereby giving neither contestant a positional advantage at the beginning of a round.
yes. that's something to think about ...
otherwise the 'random' element can easily eclipse our best efforts at
making the rounds statistically valid.
that 'random' thing can be a very large elephant loose in the room.
I wondered before if we shouldn't also use a given 'seed' when running leagues ...
as an effort to get as much repeatability into it as possible.
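The 'seed' idea above is standard practice: initialize the random number generator with a fixed value before each league run, so veggie placement and every other random draw replays identically. A minimal illustration (Python's stdlib PRNG here, standing in for whatever generator DarwinBots actually uses; the layout helper is made up):

```python
import random

def random_layout(seed, n_veggies=5, field=(800, 600)):
    """Place n_veggies at 'random' positions -- reproducibly, because
    the generator is seeded with a fixed value before every run."""
    rng = random.Random(seed)
    w, h = field
    return [(rng.randrange(w), rng.randrange(h)) for _ in range(n_veggies)]

# Same seed -> identical layout, so a league round can be replayed exactly.
assert random_layout(42) == random_layout(42)
# Different seeds -> different layouts (almost surely).
assert random_layout(42) != random_layout(43)
```

With a logged seed per round, any suspicious result (like a bot starving because no veggie appeared nearby) can be rerun and inspected exactly as it happened.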

btw Eric ...

I've been noticing large numbers of certain bots and veggies that are tied
together ... suddenly 'leaping' in mass from one place to another ...
a big gob of them say in the lower half of the screen ...
suddenly being transported, as a whole organism, to the top ...
or similarly from right to left. sometimes they go back and forth
a number of times.
it is as if one of the tied bots strays below the bottom of the screen ...
and then brings the whole gob with it when it is relocated [wrapped]
to the top, rather than just that bot being wrapped. must have something
to do with ties.
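For what it's worth, the 'leaping gob' behavior described above is consistent with the wrap being applied to a whole tied organism when one member crosses an edge, rather than each bot's coordinates wrapping independently. That is a guess at the mechanism, not anything from the actual DarwinBots source; per-bot toroidal wrapping would look something like:

```python
def wrap_coord(x, size):
    """Wrap one coordinate onto a toroidal field of the given size.
    Python's % already returns a non-negative result for positive size."""
    return x % size

def wrap_bot(pos, field=(800, 600)):
    # Each bot wraps independently; a tie then simply stretches 'across'
    # the edge instead of the whole organism being teleported to the
    # far side of the field when one member strays over.
    x, y = pos
    return wrap_coord(x, field[0]), wrap_coord(y, field[1])

assert wrap_bot((810, 300)) == (10, 300)   # off the right edge
assert wrap_bot((400, -25)) == (400, 575)  # off the bottom edge
```

Per-bot wrapping has its own complication (tie lengths must then be measured with wraparound-aware distances), which may be exactly why the code moves the whole organism.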

Suggestions / League stats tests
« on: January 03, 2007, 09:47:49 PM »
Quote from: Numsgil
Griz, I'll do some figuring on how to determine if a round is truly a statistical draw.  You're right, eventually it should be possible to say that something really is a draw, that the bots are indistinguishable from truly 50/50.
well see shvarz's and my post above ...
and more below.
Also, League matches are unbiased, but the way in which league matches are organized and interpreted may not be.  But that's really a different issue, isn't it?
well yes ...
but it can, and does, indeed affect the ranking ...
certainly when doing a league rerun or initial setup ...
and it can still stop a bot from advancing due to just one other bot having his number.
we don't need to do it that way.
however, I also must admit ...  so what?
what are we really using these rankings to determine anyway?
it has little to do with DB as a sim ...
and more to do with Bot Designing.
shvarz, after doing all this work I noticed what you mention.  This is sort of fudging the proper way of doing things.  The problem is that running it in a more scientific manner would probably require more rounds than we're willing to do.
well, it may actually reduce them quite a bit.
we can still go with it set up as is ...
do an initial 5 rounds ... I mean most matches are pretty lopsided ...
one bot winning all 5 ... so no problem there.
and for those matches that need to be extended to determine
a winner, we can leave that alone as well ...
but as I have been trying to suggest all along ...
we might have to put a cap on the max number of rounds.
just as an example ... call it 40 rounds. with a Z of two ...
a bot would have to win 27 of the 40 rounds to be called
a winner with 95% confidence, yes?  or 24 with a Z of one.
so the rounds do get extended, just as they do now ...
but upon reaching 40 rounds ... stop. halt. enough.
either call it a draw or give it to the dude with the
most rounds won, realizing our confidence is being
compromised somewhat ...
[it doesn't mean we are wrong] ... and move on.
that would eliminate all these ridiculously long matches ...
and I believe, actually reduce the time required to run
a league.
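The 27-of-40 and 24-of-40 figures check out under the normal approximation: with p = 0.5 the standard deviation of the win count is sqrt(n)/2, so the cutoff is n/2 + Z*sqrt(n)/2, rounded up. A quick sketch of that arithmetic (my own check, nothing from the league code):

```python
import math

def win_cutoff(n_rounds, z):
    """Smallest win count that sits z standard deviations above an even
    split, under the normal approximation to Binomial(n, 0.5)."""
    return math.ceil(n_rounds / 2 + z * math.sqrt(n_rounds) / 2)

print(win_cutoff(40, 2))   # 27 of 40 at Z = 2 (~95% two-sided)
print(win_cutoff(40, 1))   # 24 of 40 at Z = 1 (~68%)
print(win_cutoff(100, 2))  # 60 of 100, the figure quoted elsewhere in the thread
print(win_cutoff(100, 1))  # 55 of 100
```

So the proposal amounts to: run a fixed cap of 40 rounds, declare a winner at 27+ (or 24+ at the looser level), and otherwise call it a draw rather than extending indefinitely.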

I'm open to suggestions for removing bias where possible.


now ...
out of all this ...
playing around with leagues and seeing what
makes them tick ...
I've learned a lot ...
and that's what it's all about anyway ...
imo, ime.

Suggestions / League stats tests
« on: January 03, 2007, 09:23:58 PM »
Quote from: shvarz
Jez (or Griz?), Nums's post exactly answers your questions.  It is all about "how accurate you want to be in calling a winner?"  If all you want to do is get a rough idea, then running 5 matches would be enough.  If you want to be able to detect the tiniest differences in bots' fitness, then you need to run many, many matches.  The smaller the difference and the more certain you want to be that your result is correct, the more matches you'll have to run.
that's exactly right ...
but no, he didn't answer my questions ...
I had no questions about how statistics or these equations work.
the question was about the 'application' of these methods ...
which you go on to address:
Nums, one point of caution:  The way leagues are run now destroys all your calculations.  They are run based on principle "run until one of the bots is declared a winner or until the maximum number of matches is reached".  The first part "run until one of the bots is declared a winner" is really not the right way to do that.  In fact, it is so bad that in scientific circles it is equivalent to falsifying the data and will get your papers withdrawn and reputation ruined.  Statistics are simply not done on the principle "repeat experiment until you get the result".
right on again.
The calculations that you describe are based on the idea that you determine in advance how many repeats you are going to do. So, if you want to do it right, then decide on how accurate you want your measurement to be, then get the necessary number of matches and always run this number of matches for all bots you test.
right. so in the case of 100 rounds, and Z of 2 ...
a bot would have to win 60 of the 100 in order to
be declared the winner with 95% confidence.
now if it only won 55, you could still declare it the
winner, but not with that level of confidence.
your options then are to either use a lower Z ...
a Z of 1 allowing you to call 55 a winner ...
with a confidence level of 68% ...
or call it a draw.

Bugs and fixes / League Problems
« on: January 03, 2007, 04:19:50 PM »
can't seem to reply/post to Num's stat thread ...
so will put it here.
Moved to League stats tests ~~~Jez

Bugs and fixes / League Problems
« on: January 03, 2007, 03:18:04 PM »
Quote from: Jez
I’m not sure exactly what good work Eric has done for this last buddy drop, I’ve been running it under VB in case of crashes but it seems to run faster than before.
The current standing, which I think has been affected by the latest changes, is as follows:
    [2]Destinatus P
    [3]The One
    [4]Dominator I
    [5]Darth S
    [6]Spanish C (103 wins)
    [7]Animal S (122 wins)
    [8]Carnatus Orbis
    [10]Una 3.0
well ... this will be drastically altered!
Griz, I am not questioning your mathematical abilities, you far exceed my abilities in that, or probably any other, field.
It is the premise that the start order is arbitrary that I disagree with, this isn't a reformation of the league from scratch, we would need to run all existing bots if that were the case; it is a shake-up of the existing placement of the bots in the league, something that is far from arbitrary.
understood. if that were indeed the case.
I don't think in this case, it is.
an example:
the list you provide above isn't going to be anywhere
near what the ranking will end up being.
Din is going to drop to around #5 ...
Dominator Invincibalis to around 8th ...
Carnatus Orbis out of the top 10 ...
Kyushu down to around 20 ... etc.
and this isn't taking into consideration
that there are other bots farther down
the line that will still come up and take
them down a bit further.
so while this may have been a valid ranking at one time ...
from the days of 2.37???? or did someone rerun leagues
since 2.4X? ...
I don't know that it still is.
well ...
having run a bunch of contests ... I know it isn't.

anyway ... bots behave much differently now, eh?
the physics of the program have been altered ...
and costs and lots of stuff ...
so you are going to run into the very problem I've been
attempting to draw attention to ...
I've already run into them ... and showed you one example
being that of D Scarab 3 being prevented from moving up.
[which would have happened to him in a challenge as well, btw]

in fact, even as we speak, I have him challenging the 6 bot
mini league ranked just ahead of him ... #7-#12
and so far he has taken out both Carnatus Orbis & Duplo Simpleboticus
and is working now on James 4 ...
8 rankings above where the league stuck him down at #18.
so he's still on his way up.
The comment I made about a RR being ‘a manually intensive, long drawn out way to decide the results’ is based on the fact that there is no way for the program to do it automatically at the moment and that AFAIK a RR where every bot fights every other bot is undoubtedly going to take longer than a challenge match where bots don’t always compete against all the other bots.
right. having to do this manually is a big pain.
that's why I'm suggesting we consider making some changes.

as far as challenge matches go ...
that is a slightly different animal ... but I have similar problems with it.

as far as rerunning/establishing a league ... as it is now ...
it won't usually take as long ... but there is no guarantee of that.
it is possible they would have to do just as many .... possible, if not probable.
however, just as probable as it only doing the minimum number.
I would expect they would 'average' about half ... probability being what it is.
so you do a trade off ...
trading speed for accuracy ...
it's a balancing act ...
which I don't have a problem with, you do what you have to ...
but lets not pretend it is then as correct as it could be ...
or can justify using all that time for statistical calculation
re  # of rounds required to win ...
because the error introduced in the initial placement ...
is far greater than that, and will nullify all those great
efforts at being precise.
hmmmmmm ....
looking for another analogy.
ok ... like spending hours/days carefully stacking up 100,000 dominoes to
make a really cool display when you finally stand back and tip that first one ...
all that time there being a great dane running around in the room.
I had no intention of 'shooting the messenger'; you are undoubtedly correct in what you say, I am arguing over the premise that the starting order is arbitrary.
I don’t see why, if the starting order is considered valid and the method of competition (challenge) is valid then the results don’t hold some statistical relevance.
right. and I am saying they, and your premise, are not valid.
so I guess we can agree we disagree.

(contest results are now Spanish C 110 – Animal S 130)

let me know what they are up to in another week.  lol
see ... here we are at the place I don't get ...
it is a statistical draw!!!!
why keep screwing around with it?
why not give it to Animal S ... he's got more rounds ...
and move on.
if you are concerned about the time to run leagues ...
I suggest you take a look at what is consuming much
of that time ... and determine if it is worth it or not.
I don't happen to think it is ... but ...

hey ...
made a mistake before for # of matches required
for all bots to go up against each other  ...
had a + there instead of a -
it isn't n(n+1)/2  but n(n-1)/2
3 bots = 3 matches: ie  A-B, A-C, B-C
6 bots = 15
8 bots = 28
10 bots = 45
30 bots = 435  
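The corrected count can be verified directly: each of n bots meets the other n−1 once, and each pairing would otherwise be counted twice, giving n(n−1)/2. A one-liner confirms the table above (a throwaway helper, just checking the arithmetic):

```python
def round_robin_matches(n_bots):
    """Matches needed for every bot to meet every other exactly once:
    n * (n - 1) / 2 distinct pairings."""
    return n_bots * (n_bots - 1) // 2

# Reproduces the table: 3 -> 3, 6 -> 15, 8 -> 28, 10 -> 45, 30 -> 435
for n in (3, 6, 8, 10, 30):
    print(n, round_robin_matches(n))
```

This also settles the earlier back-and-forth: n(n+1)/2 overcounts by n (it includes each bot playing itself, in effect), and neither is n^2, though all are O(n^2).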

I'm tellin' ya ... those 8 bot sub-leagues are looking
better to me all the time.

Bugs and fixes / League Problems
« on: January 03, 2007, 12:13:09 PM »
Quote from: Jez
it would be a manually intensive, long drawn out way to decide the results.

not really ...
regardless of what method you use to
run a league, a 30 bot league is going to take you
one hell of a long time ... IF you ever even get thru
it without a crash somewhere along the way.
[let me know if/when you ever get it completed]  lol

and as I've pointed out ...
for each bot to meet every other ...
it's n(n+1)/2 ... not n^2
so ...
30 bots = 465 matches
20 bots = 210
10 bots = 55
8 bots = 36
and I still think we would be much better off making
4 leagues of 8 bots, 32 all told ...
then allowing the top one or two in each ...
to challenge the league ranked above them.
a new bot could start off challenging the lowest league ...
and be allowed to move to others if he is good enough
to 'make the cut'
compiling the rankings in this way ...
would not only be more accurate ...
but would actually make the project manageable.
I do understand the point you are making but in the context of challenge matches the results will be both accurate and fair.
well ... close perhaps, but I don't see how you can say accurate.
of course one could always take a given bot that has been 'stopped' ...
and put it up against a higher ranked bot outside of leagues just
to see if it was a fluke, and if so, present the results and request
a rematch or re-evaluation.

but ... whatever.

Act as you will.
Go on as you feel.
This is the incomparable way.

I've got all those pesky real world things
that better deserve my attention anyway.

good luck.

Bugs and fixes / League Problems
« on: January 03, 2007, 11:51:53 AM »
Quote from: Numsgil
About the statistical significance of the leagues, each match is a statistical test with n trials that attempts to test the null hypothesis that two bots are evenly matched.  The number of trials n increases until that null hypothesis is rejected.

A contender is given the title of victor in a league match if it wins 1/2 n + sqrt(n) rounded up rounds.  It should be easy to see that eventually, this will reduce to: whichever bot wins the majority of rounds.  This happens when n approaches infinity (because the big O of 1/2 n + sqrt(n) = big O of n).

Now, I don't know what sort of confidence interval this is using.  The thing with statistics, is that when you reject the null hypothesis you're never sure if you're not making an error.  Usually a confidence interval of 95% is used, which means that you'll be wrong when rejecting the null hypothesis 5% of the time.  Which means that when you run the leagues over, you might get a different result, because there's still that 5% error.  Or 1%, or .001 %, or whatever the confidence interval happens to be.  I'm going over my stats notes now to find out what confidence interval we're using.

The final league standings don't represent which bot is the "best".  It should be easy to see that the only possible ranking in a round robin tournament is by groups.  I.e.: 0 losses, 1 loss, etc.  What a ladder represents, rather, is a somewhat arbitrary but fast way of ranking contenders.  In general, the relative rankings represent relative strength.  But there are exceptions.  Run any real world ladder twice and you'll get 2 different results.

If we want to rank bots based on their absolute strength, we would need to have n^2 matches, where n is the number of bots.  We would need to use a chi squared test to rank them.
not so ...
you only need n(n+1)/2 ...
which in the case of 30 bots, is 465.
btw, leagues of 10 is only 55, a reasonable number
to run without taking days to do so.

and the chi-sqr test wouldn't rank them? ...
its purpose is to tell you if your results fall within a
range that is acceptable, one you can be confident in.
please tell me exactly what data you would be using to
run this chi-sqr test in this case?

and once again ...
people, please hear what I am saying ...
I don't care how precise you think you are being in
calculating how many rounds it takes to find a statistically
valid winner of a given match ...
when the 'arbitrary' initial order that you start the bots with ...
upon attempting to establish league standings ...
will have a much greater effect on their ranking than does
all your playing with numbers ...
unless every bot is allowed to go up against every other.
that is all I am saying.

now you can dance around that all day long ...
it won't change a thing.
don't get so caught up in the details that you miss
the bigger picture here.

and don't shoot the messenger ...
just 'cause you don't like the message.

do it however you want ....
but please ...
don't pretend it's statistically valid as it is now.
it isn't.
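On the "I don't know what sort of confidence interval this is using" question in the quote above: the 1/2 n + sqrt(n) cutoff sits two standard deviations (sqrt(n)/2 each) above an even split, i.e. roughly the 95% two-sided level. The exact fair-coin tail probability at that cutoff can be computed directly (a sketch of the check, independent of any league code):

```python
import math

def fair_coin_tail(k, n):
    """P(at least k wins in n rounds) for an exactly 50/50 match --
    the exact binomial tail, no normal approximation."""
    return sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n

for n in (40, 100, 200):
    k = math.ceil(n / 2 + math.sqrt(n))  # the rule quoted above
    p = fair_coin_tail(k, n)
    print(f"n={n}: cutoff {k} wins, fair-coin tail probability {p:.4f}")
```

The tails come out around the 2% one-sided mark, so the implicit confidence of the rule is in the ~95-96% two-sided neighborhood, consistent with the Z = 2 figures used in this thread.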
