25 Feb, 2010, Silenus wrote in the 1st comment:
Votes: 0
Hi guys,

I am curious about the specific procedures one uses to assess the performance of applications written in C/C++. Is the best way to do this for interactive servers to "record" a series of actions (or write some robots which simulate this, à la DS) and replay them as you performance-tune the system? Obviously the recorded approach is reproducible (same set of events), whereas robots may not be. Would you also use your standard profiling tools to get a sense of where the bottlenecks are while doing this?
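
To make the question concrete, something along these lines is what I have in mind for the "record" approach. The process_command() hook and session.log file are just placeholders, not from any real codebase:

#include <chrono>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Stand-in for the real server hook that feeds one player command into the
// game loop; replace with the actual entry point.
static void process_command(const std::string& cmd)
{
    (void)cmd; // placeholder so the sketch compiles on its own
}

int main()
{
    // Load a previously recorded session: one command per line.
    std::ifstream log("session.log");
    std::vector<std::string> commands;
    for (std::string line; std::getline(log, line); )
        commands.push_back(line);

    // Replay the identical sequence and time it, so the optimized and
    // unoptimized builds are measured against exactly the same workload.
    auto start = std::chrono::steady_clock::now();
    for (const auto& cmd : commands)
        process_command(cmd);
    auto end = std::chrono::steady_clock::now();

    std::cout << "replayed " << commands.size() << " commands in "
              << std::chrono::duration<double>(end - start).count() << " s\n";
}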

Thanks,
25 Feb, 2010, David Haley wrote in the 2nd comment:
Votes: 0
Having a bunch of agents doing a bunch of random things is OK, but obviously won't get you full coverage. You can start by turning on the profiler and wandering around yourself doing some stuff; if some of your functions are unreasonably slow, you will find out pretty rapidly as you examine the results. That said, I think it's much easier to get realistic performance measurements by having a play-test phase where you can examine performance while people are doing the things they'd actually do while playing the game.
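
With gcc, "turning on the profiler" usually just means building with -pg and reading the gprof report. A cruder complement is a scoped timer dropped into functions you suspect; a minimal sketch, where update_world() is only a hypothetical stand-in for one of your own functions:

#include <chrono>
#include <cstdio>

// RAII timer: prints how long the enclosing scope took.  Dropping one of
// these at the top of a suspect function makes slow spots visible while
// people are doing normal things in the game.
struct ScopedTimer {
    const char* name;
    std::chrono::steady_clock::time_point start;
    explicit ScopedTimer(const char* n)
        : name(n), start(std::chrono::steady_clock::now()) {}
    ~ScopedTimer() {
        double ms = std::chrono::duration<double, std::milli>(
                        std::chrono::steady_clock::now() - start).count();
        std::printf("%s took %.3f ms\n", name, ms);
    }
};

// Hypothetical game-loop function instrumented with the timer.
void update_world()
{
    ScopedTimer t("update_world");
    // ... real work here ...
}

int main() { update_world(); }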

Premature optimization is the root of all evil, and all that.
26 Feb, 2010, Silenus wrote in the 3rd comment:
Votes: 0
Thanks, David. In this case I will mostly be comparing an optimizer's potential impact against "normal" unoptimized code.
26 Feb, 2010, David Haley wrote in the 4th comment:
Votes: 0
Well, it's always difficult to know how to compare two things when you're not sure what those things are. Are you trying to test the interactive part of the code or the non-interactive bits? You can always write your own test function that runs through a series of actions, exercising the same functions over and over again; this gives you a very clear benchmark to run against the optimized and unoptimized code. If you're trying to draw a comparison, you do need to be consistent in each trial run, of course.
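
For instance, a trivial benchmark driver along these lines (do_work() is only a stand-in for whatever routine you actually want to measure) repeats the identical workload and takes the best of several runs to keep the trials consistent:

#include <chrono>
#include <cstdio>

// Stand-in for the series of actions you want to benchmark;
// substitute calls into your own code.
static long do_work()
{
    long sum = 0;
    for (int i = 0; i < 1000000; ++i)
        sum += i % 7;
    return sum;
}

int main()
{
    using clock = std::chrono::steady_clock;
    double best = 1e300;
    long sink = 0; // keep the result live so the work isn't optimized away
    for (int run = 0; run < 10; ++run) {      // identical trials
        auto t0 = clock::now();
        sink += do_work();
        auto t1 = clock::now();
        double s = std::chrono::duration<double>(t1 - t0).count();
        if (s < best) best = s;               // report best-of-N to cut noise
    }
    std::printf("best of 10 runs: %f s (sink=%ld)\n", best, sink);
}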
27 Feb, 2010, Silenus wrote in the 5th comment:
Votes: 0
My idea at present is to try to create a crude assessment (using some crude statistics) of how much I can save, mostly in terms of potential instruction counts, between two versions of different bits of source. The problem is that I cannot create a weighted sum of these quantities without knowing, in essence, how much time is spent where.

I might, after some discussion, use some other approach for the optimizer. Originally I was thinking of modifying the code generator, but now I am thinking that I could optimize the generated bytecode streams instead, which would be far easier to implement.
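
As a rough sketch of what I mean, here is a single peephole pass over an emitted stream, using a made-up two-field instruction format rather than the real driver's bytecode; rewriting the stream after the fact leaves the code generator untouched:

#include <cstdio>
#include <vector>

// Made-up bytecode: opcode plus one integer operand.
enum Op { PUSH_CONST, ADD, NOP };
struct Insn { Op op; int arg; };

// One trivial peephole rule over the generated stream:
// fold PUSH_CONST a, PUSH_CONST b, ADD  ->  PUSH_CONST (a+b).
std::vector<Insn> peephole(const std::vector<Insn>& in)
{
    std::vector<Insn> out;
    for (std::size_t i = 0; i < in.size(); ++i) {
        if (i + 2 < in.size() &&
            in[i].op == PUSH_CONST && in[i + 1].op == PUSH_CONST &&
            in[i + 2].op == ADD) {
            out.push_back({PUSH_CONST, in[i].arg + in[i + 1].arg});
            i += 2; // skip the two instructions that were folded away
        } else {
            out.push_back(in[i]);
        }
    }
    return out;
}

int main()
{
    std::vector<Insn> code = {{PUSH_CONST, 2}, {PUSH_CONST, 3}, {ADD, 0}};
    auto opt = peephole(code);
    std::printf("%zu insns -> %zu insns\n", code.size(), opt.size());
}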