Author Topic: Release date  (Read 15975 times)

Offline Numsgil

  • Administrator
  • Bot God
  • Posts: 7742
Release date
« Reply #30 on: May 03, 2008, 03:46:46 AM »
While switching between .NET languages is so easy it's almost trivial, moving from VB6 (the current Darwinbots code base) to .NET is decidedly not.

Offline Moonfisher

  • Bot Overlord
  • Posts: 592
Release date
« Reply #31 on: May 03, 2008, 07:41:33 AM »
1. Make it work.
2. Make it right.
3. Make it fast.

Heh, that's actually how I end up working a lot of the time... even for separate small features (but with less emphasis on making it fast)...
The thing is, you often know whether a feature will need to be expanded a lot and may be run very often, so building it with speed and OO in mind can help a lot...
You don't need to make everything perfect in every sense, but you can predict uses for the system, make sure your structure supports them and is easy to expand, run some early performance tests, and generally keep performance in mind while building your feature... not pass strings around and such... try to stick to pointers and the like.
Also, if you're going to store something and you know it will happen very often, then you may want to consider how you're going to keep the file size down before building everything...

I agree there's no need to optimize for something that may never be needed, but I also think it's important to think ahead or you'll spend too much time refactoring code... Sometimes making something work right will barely be related to just making it work, so that step should be skipped IMO, and sometimes performance is such a big issue that you need to remake virtually everything if you didn't have it in mind from the start...
So I agree that for a lot of features you can proceed this way, but there are also a lot of cases where you need to think things through before getting started (if you know that structure or performance can be an issue).

But where game engines are concerned... I don't think they're optimizing prematurely... I think they need to optimize anything they can... GTA 4 still has lag at times, and so does virtually any game that didn't cut away anything that could slow it down with extreme prejudice. (Like Blizzard, who only accept perfect features, so IMO they often end up cutting away a lot of the fun. At least in WoW they did; all that was left was an MMO single-player game where you could brag to your friends about the gear you managed to grind, and if your friends weren't on your server... then it was actually no different from a single-player game, except monsters took ages to die.) (Anyway, that was off topic. I'm just not a big fan of MMORPGs where they cut out the R to favor boring things like grinding and quests like "bring this letter to someone you've never heard about, and may never hear about again" or "kill X boars of Y color"... worst asset reuse I've ever seen. I was killing boars at level 1... and several times at later levels, and then the expansion came... and the first thing I killed in the new world... was a boar... and it took me longer to kill it than when I was level 1... so by that logic my character spent WAY too many hours getting worse at killing boars... lamest game EVER... wouldn't play it again if I got paid by the hour... well...)

Anyway, just saying, game engines can always use better performance, and WoW is the worst wannabe MMORPG I've ever played, less fun than Pong... (To be fair, I did have some fun with PvP; one thing Blizzard knows how to do is balance PvP, but you get tired of the 3 maps they had pretty fast.)

Offline Numsgil

  • Administrator
  • Bot God
  • Posts: 7742
Release date
« Reply #32 on: May 03, 2008, 02:22:15 PM »
Ah, see, you fall into this mental trap of "the game needs to be fast, so I need to write fast code."  But the thing is, you need to worry about speed like this if and only if:

1.  Your game is CPU bound (I'll bet you $100 that GTA is fill-rate bound, i.e. it's limited by the number of pixels it can draw), and
2.  The section of code you're working on is a bottleneck in the code.  Note, however, that you can only determine this last one if you actually profile the code.  If you can't say exactly, with a number, how slow the code you're working on is, then it's premature optimization.  Meaning you do not write the code to be fast up front.  You write it to be readable and understandable 10 months from now when you go back and refactor it.

All this said, choosing the right algorithm is of primary importance, and cuts way deeper than optimization.  If you write an O(n^2) algorithm, you're screwed right from the beginning, because the scope of that particular algorithm has an inherent limit.  If you bubble sort a list of 5 elements, you might feel justified because n is so small, and writing an O(n log n) sorter might be "overkill", but that's where the STL and 3rd-party libraries come in.
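To make that concrete, here's a minimal C# sketch (the Bot class and its Energy field are made up purely for illustration): a hand-rolled bubble sort versus just handing the job to the framework's O(n log n) sort.
Code: [Select]
using System;
using System.Collections.Generic;

class Bot
{
    public double Energy;
}

static class SortExample
{
    // O(n^2) bubble sort -- only looks harmless while the list stays tiny.
    static void BubbleSort(List<Bot> bots)
    {
        for (int i = 0; i < bots.Count; i++)
            for (int j = 0; j < bots.Count - 1 - i; j++)
                if (bots[j].Energy > bots[j + 1].Energy)
                {
                    Bot tmp = bots[j];
                    bots[j] = bots[j + 1];
                    bots[j + 1] = tmp;
                }
    }

    // O(n log n), and already written and tested by someone else.
    static void LibrarySort(List<Bot> bots)
    {
        bots.Sort((a, b) => a.Energy.CompareTo(b.Energy));
    }
}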

So my priority list when I code (and get my say) looks like this:

1.  Choose algorithms with low Big O
2.  Write tests to validate the results of my algorithm, and check for pathological errors.
3.  Write code to fulfill those tests by any means necessary.
4.  Refactor the code I just wrote to look pretty.
5.  Profile for performance issues if necessary.  Ignore performance issues if not necessary.

So as an end result I have a lot of code that is easy to follow, with good algorithms, and tests to ensure they work correctly, but that is written in an inefficient manner.  At the end of the day I might refactor those inefficiencies or I might not; it just depends on whether it's necessary.
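As a rough sketch of what steps 2 and 3 look like in practice (assuming NUnit; the Tokenizer here is invented for the example and isn't the actual DB3 parser API):
Code: [Select]
using System;
using NUnit.Framework;

// Step 3: the simplest thing that satisfies the tests; step 4 prettifies it later.
public static class Tokenizer
{
    public static string[] Tokenize(string source)
    {
        return source.Split((char[])null, StringSplitOptions.RemoveEmptyEntries);
    }
}

// Step 2: tests that pin down the expected results, including a degenerate case.
[TestFixture]
public class TokenizerTests
{
    [Test]
    public void Tokenize_SplitsOnWhitespace()
    {
        CollectionAssert.AreEqual(new[] { "3", "5", "add" },
                                  Tokenizer.Tokenize(" 3  5 add "));
    }

    [Test]
    public void Tokenize_EmptyInput_ReturnsNoTokens()
    {
        Assert.AreEqual(0, Tokenizer.Tokenize("").Length);
    }
}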

Offline Moonfisher

  • Bot Overlord
  • Posts: 592
Release date
« Reply #33 on: May 03, 2008, 05:29:08 PM »
The only time I saw GTA lag was when a large number of NPCs got hit and went into rag-doll mode... so it would seem it was just taking too long to process the physics and collisions of all the rag dolls...
But I think a lot of the time, if you're going to build on something, you never know exactly how far you're going to take it, so when building a base it feels safer to optimize performance right away so you don't risk having to make a change later on that will affect all the expansions or inherited... stuff... (I'm feeling very intellectual right now.)
I kind of had the impression that lighting and physics were the big bottlenecks though... and I don't think most people have a physics card, so I would imagine that code optimization should help, even if it's not the main issue.
Anyway... I'm just an intern... what do I know...

Offline Commander Keen

  • Bot Builder
  • Posts: 91
Release date
« Reply #34 on: May 03, 2008, 08:08:35 PM »
Readability is more important than optimisation. I've got lots of unfinished QB programs which will never be completed because I can't understand my own code.
My VB6 programs have a much higher success rate, even if that may be because I learnt to program

At the end of the day, PCs are getting fast enough that you don't need to optimise everything. When I used to program on my 386 laptop in QB, I found optimisation vital, and I wrote several test programs that demonstrated this. Now, with computers capable of calculating over 1,000,000 times faster than before, we don't need so much optimisation, which makes interpreted languages like Java and Perl usable for serious applications.

Ok, that's my raving done. No guarantees for reliability, because I am an amateur programmer, and most of what I know is self-taught.

Offline bacillus

  • Bot Overlord
  • Posts: 907
Release date
« Reply #35 on: May 04, 2008, 02:20:16 AM »
I had a look today at the DB3 code. I was struck by the simplicity of the DNA parser; I expected a huge chunk of unreadable code, not a huge chunk of readable code.  I still haven't figured out how to commit anything, not that I have time with exams so close and the F1 league to chip away at. I would suggest moving the whole thing to Google Code, though, if it's not too cumbersome.
"They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown."
- Carl Sagan

Offline Numsgil

  • Administrator
  • Bot God
  • Posts: 7742
Release date
« Reply #36 on: May 04, 2008, 04:18:44 AM »
I try my best.  I'm rather happy with the DNA code atm, though it still needs some features implemented, and I skipped writing tests for the top-level code.  And I sort of went overboard with exceptions in the parsing code...

To commit you'd need SVN access; anonymous access is read-only.  I can set up an account if you're really interested.  Same goes for anyone.  It's all version controlled, so if you screw things up I'll just revert it.

I've heard that Google Code isn't quite up to snuff just yet, or that's what my friend at work said.  I might look into it and see if it would work better than my private SVN for this use.

Offline bacillus

  • Bot Overlord
  • Posts: 907
Release date
« Reply #37 on: May 05, 2008, 02:56:27 AM »
Google Code is probably so good because it's public; no one can sabotage your work permanently unless they're an evil hypnotist with way too much spare time, and you sometimes end up drawing brilliant ideas out of some random guy who drops in a gem of code.
"They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown."
- Carl Sagan

Offline Trafalgar

  • Bot Destroyer
  • Posts: 122
Release date
« Reply #38 on: May 06, 2008, 01:14:52 AM »
Quote from: Numsgil
What I like about C# are things like run time reflection (ie: easily determining the type of an object at run time).

Side note: run time reflection has poor performance. Delegates also have poor performance. But, yeah, don't worry about that much.

There are cases where you can write code which is faster without being less readable, and if you know about those you can take them into account when designing something. For example, if you're planning something where you would normally compute (numFoos % maxFoos) (modulus), you can swap the slow modulus operation for an extremely fast & operation, provided numFoos is always non-negative and maxFoos is a power of two (2^n). You & by 1 less than you would have %'d by: (numFoos % 4) becomes (numFoos & 3), for example.

E.g. here's an example in JavaScript (I just copied this right out of what I'm working on at the moment).
Code: [Select]
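// maxMatches is assumed to be a power-of-two size minus 1, so the & wraps the index the same way % would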
var blankSubEntry = (lastWritten[(lastWrittenStart+matches)&maxMatches]==="");
if (hash == lastWritten[(lastWrittenStart+matches)&maxMatches] || blankSubEntry) {

Now normally you might say "well, I should store (lastWrittenStart+matches)&maxMatches in a variable and then refer to that" - and that's fine. But hopefully the compiler is smart enough (though not necessarily with JavaScript...) to optimize that so it only gets calculated once anyway. Even if it isn't, and you do it 5 times, it should still be faster than doing it with % once. (% takes 30-40 clock cycles, IIRC, and & takes about 0.5, on a Pentium 4*)

* = I'm oversimplifying how that works. It's actually more complicated than that, and depends on what other operations are being done at the time, and what's going to be done after it, and I don't have a clue how it has changed after the Pentium 4 or with AMD CPUs, etc.

So, things I usually do:
1. Prefer & instead of %, and choose power-of-2 sizes for things which will be using it (otherwise we can't use &).
2. For distance checks, omit the sqrt, and premultiply whatever we're going to be comparing against. You can calculate xdist*xdist + ydist*ydist and compare it to somedist*somedist to determine if the distance is <, =, or > than somedist. You can't determine how far apart they are, though. (Sqrt is slow)
3. If computing foo squared, write foo*foo instead of calling a pow function (an optimizing compiler might already do this)
4. If doing foo * 2, replace with foo+foo (an optimizing compiler might already do this). Since multiplication (30ish cycles?) is much slower than addition (.5 cycles), you could even do foo+foo+foo+foo if you really wanted to, except that foo << 2 would also work there and would probably be faster.
5. When designing things that will be multiplied or divided by a number, prefer designing them so that you can use power-of-two multipliers/divisors. The compiler might be able to optimize multiplication/division by a constant which is a power of 2 to use << and >> instead, which is faster than multiplication, or you can do it yourself.

I tend to code in an already-optimized-but-readable style, but leave the really heavy optimizations to the compiler (especially with C#). Most of what I do are things the compiler can't do or wouldn't be likely to - e.g. it can't change the number of terrain types from 7 to 8 and swap the % for & without breaking your program, but you can design it that way yourself.
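As a quick C# sketch of tricks 1 and 2 from the list above (all the names here are made up for illustration):
Code: [Select]
static class FastMathTricks
{
    public const int MaxMatches = 8;              // deliberately a power of two...
    public const int IndexMask = MaxMatches - 1;  // ...so & can stand in for %

    // Trick 1: equivalent to i % MaxMatches for non-negative i, without the slow modulus.
    public static int WrapIndex(int i)
    {
        return i & IndexMask;
    }

    // Trick 2: compare squared distances so Math.Sqrt never gets called.
    public static bool WithinRange(double dx, double dy, double range)
    {
        return dx * dx + dy * dy <= range * range;
    }
}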

(In some languages I tend to optimize less. For instance, JavaScript for Firefox extensions. I once tried to track down the bottleneck in an extension of mine which ran rather slowly on many webpages, only to trace almost all of the CPU use to a part of Firefox's JavaScript math code that I couldn't do anything about. That is, the extension checks the HTML and CSS for a page and reverses colors to change dark-on-light pages to light-on-dark pages. The problem is that for some reason Firefox's JS math was terribly slow, and so were the string-to-int conversions and vice versa (which were pretty much required for the extension to work and be useful).)
« Last Edit: May 06, 2008, 01:16:14 AM by Trafalgar »

Offline Numsgil

  • Administrator
  • Bot God
  • Posts: 7742
Release date
« Reply #39 on: May 06, 2008, 02:17:54 AM »
Quote from: Trafalgar
1. Prefer & instead of %, and choose power-of-2 sizes for things which will be using it (otherwise we can't use &).

That's specifically what I'm avoiding.  You see, it just doesn't matter if there's a 100x faster way of doing something if that function only gets called 3 times a second every other Tuesday.  Code for people and algorithms 99% of the time.  Then profile, identify hot spots, and refactor/optimize them away.  You might know how your code works, but it's going to be Greek to the guy that maintains it after you.

Once you crack out the profiler and are able to say something like "Function FooBar is taking 5ms per call.  Reduce that down to 2ms per call", you can iteratively refactor your code just for that function to achieve the desired performance.
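For what it's worth, a crude way to put a number like that on a function without a full profiler is System.Diagnostics.Stopwatch (a sketch; FooBar here is just the hypothetical hot spot from above, and a real profiler still tells you more about callers and cache behaviour):
Code: [Select]
using System;
using System.Diagnostics;

static class TimingHarness
{
    static void Main()
    {
        const int iterations = 10000;

        FooBar(); // one warm-up call so JIT compilation isn't counted

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            FooBar();
        sw.Stop();

        Console.WriteLine("FooBar: {0:F4} ms per call",
                          sw.Elapsed.TotalMilliseconds / iterations);
    }

    // Stand-in for whatever function the profiler fingered.
    static void FooBar()
    {
        double sum = 0;
        for (int i = 1; i <= 1000; i++)
            sum += Math.Sqrt(i);
    }
}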

Offline Moonfisher

  • Bot Overlord
  • Posts: 592
Release date
« Reply #40 on: May 06, 2008, 10:22:02 AM »
That's a nice trick, I'll have to remember it when working in native code.
But I would definitely write a comment with the "normal" way of doing it and probably mention the conditions the values have to meet.
I'd probably make a function that used that method when the conditions were met and fell back to the normal way otherwise.

I know you don't need to optimize everything to an extreme degree, but if you have a feeling the feature you're working on will be used frequently, you may as well optimize it right away. And often optimized code isn't much harder to read; you don't have to go completely nuts writing everything in assembly language. Just managing memory and not passing strings around and such will make a significant difference and would barely make the code any harder to read... IMO pointers aren't that bad, it just takes a little getting used to with all the referencing stuff.
Of course if a function is rarely used it's not going to make a significant difference, although if you have a lot of heavy functions they can add up.
But so far it seems to me that a lot of the features made in a game engine will get called very often when they're in use, or if they're triggered by an event of some sort there's a risk it can happen too many times at once. I can imagine people getting carried away, but I guess any optimization will help if you're already straining the processor with lighting and physics and such... so it may actually make a noticeable difference in the long run. I'm not sure how far they take it, but I know some of the native code is way beyond my understanding and I don't think they did it for fun.  (But I would imagine that when you want to sell a game engine it's good to optimize almost everything, since you don't know how often people are going to use certain features.)

Offline Numsgil

  • Administrator
  • Bot God
  • Posts: 7742
Release date
« Reply #41 on: May 06, 2008, 01:56:32 PM »
Yes, there are certain features that you know up front need to be optimized, because you have prior domain knowledge (you worked on an engine before and this part was slow).  Global knowledge helps too (this function is inside a tight loop that gets called 1000 times every game cycle).  But even in these cases, if you write the code to be fast first, you're making a mistake.  The problem is that, trivial optimizations aside, you don't know for sure what's going to be fast and what's going to be slow.  Modern computers have all sorts of caching issues, math coprocessors, etc., that change from year to year and change what sort of optimizations you need to make for your hardware.  Optimizations always assume something about the hardware you're using (that it has SSE, or how it handles bits, that sort of thing).

As an anecdote, there's a popular sqrt approximation that Quake used involving bit manipulations and some adds and multiplies.  Sqrt calls were a bottleneck for the Darwinbots Visual Basic code.  I naively replaced some sqrts with these approximations, and just assumed it would be faster.  I believe it was Sprotiel who actually profiled and showed me that my Quake approximation was actually slower than a native call to sqrt.  I don't know exactly why.  Probably modern processors have a hardware level sqrt function that just beats any software approximation I could write.

Moral of the story here is even if you think you know exactly how and where to optimize, use a profiler anyway.  At no point should you make a change for the sake of optimization without profiling before and after.
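For reference, that approximation looks roughly like this ported to C# (a sketch from memory of the well-known 0x5f3759df trick, not necessarily the exact code I tried; as above, time it against Math.Sqrt before believing it helps):
Code: [Select]
using System;

static class QuakeStyleSqrt
{
    // Fast inverse square root: magic constant, a shift, and one Newton-Raphson step.
    // sqrt(x) is then approximated as x * InvSqrt(x).
    // Note the byte-array round trip adds managed overhead of its own.
    public static float InvSqrt(float x)
    {
        float half = 0.5f * x;
        int bits = BitConverter.ToInt32(BitConverter.GetBytes(x), 0);
        bits = 0x5f3759df - (bits >> 1);
        float y = BitConverter.ToSingle(BitConverter.GetBytes(bits), 0);
        return y * (1.5f - half * y * y);
    }

    public static float FastSqrt(float x)
    {
        // On modern CPUs the hardware sqrt (Math.Sqrt) often wins anyway -- profile both.
        return x * InvSqrt(x);
    }
}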
« Last Edit: May 06, 2008, 01:58:00 PM by Numsgil »

Offline Trafalgar

  • Bot Destroyer
  • Posts: 122
Release date
« Reply #42 on: May 06, 2008, 05:46:08 PM »
By the way, what do you use for profiling in C#? (Last time I looked, I didn't find any free or open-source ones)

Profiling without an actual profiler is a bit of a pain. Running a loop which calls a function several thousand times, and timing how long it takes, doesn't necessarily give you results which reflect the actual real-world performance of the function, as far as I could tell.

Offline Numsgil

  • Administrator
  • Bot God
  • Posts: 7742
Release date
« Reply #43 on: May 06, 2008, 06:00:00 PM »
Performance hasn't been an issue with any C# apps I've done yet, but I think the professional edition of Visual Studio (which I have) has some built-in profiling.  A quick Google search also led me to this page.

Offline bacillus

  • Bot Overlord
  • Posts: 907
Release date
« Reply #44 on: May 08, 2008, 02:53:54 AM »
Have you heard of a tool called Clover? It searches through the code and checks which code is not covered by test cases. This could be useful, as there is an enormous number of things to be tested, by the looks of it.
"They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown."
- Carl Sagan