11 Feb, 2007, Guest wrote in the 1st comment:
Votes: 0
I'm not sure exactly how to figure this out, who to ask, or where to begin looking for this kind of problem. But here goes.

I recently upgraded both of my servers to dual core AMD64 CPUs. Obviously I am thrilled with the new speed and such. But it seems to have had an interesting side effect I wasn't counting on.

In the game_loop handlers that deal with idling descriptors, my code is just like any other Merc derivative. It adds one to the idle counter with each pulse as it runs the game loop. Immortals are all set to idle off the mud after 2 hours. Lately though, this has been drastically cut to somewhere in the neighborhood of 15 minutes.

The deeper implications of this should be clear. It suggests that all of the handlers game_loop stalls time for are firing off at greatly increased intervals. Zeno was able to verify this easily enough on his game, which is also affected (same server, etc.). It appears as though everything has been accelerated to about 8x normal speed.

Is this something that is known and can be corrected? Or have I stumbled into one of those uncharted areas we bleeding edgers often end up in?

EDIT: After a make clean and reboot, the skew is down to twice normal speed.
11 Feb, 2007, Omega wrote in the 2nd comment:
Votes: 0
So what you're saying is that those of us on your system should change our pulse-ratios to keep the same update patterns?
11 Feb, 2007, Guest wrote in the 3rd comment:
Votes: 0
That hasn't been determined yet. You should check and make sure you're affected by the problem first. If you know imms get booted for idling for 2 hours, then you'll need to see if you get booted in 1, etc. If not then you've got nothing to worry about.
11 Feb, 2007, Devenon wrote in the 4th comment:
Votes: 0
If a coding fix is required, I think it should be made processor independent. Base it on real clock time so it wouldn't matter how fast (or slow) your processor was; it would still take 10 minutes (real time) per hour of game time, or whatever ratio you'd want.
12 Feb, 2007, Omega wrote in the 5th comment:
Votes: 0
Yeah, my pulses are skewed a little; time updates for day/night are faster, about twice as fast.

I'm going to do some more tests though.
12 Feb, 2007, Guest wrote in the 6th comment:
Votes: 0
Well, for whatever odd reason, after the server Zeno and I are on crashed and rebooted, everything seems fine. Which leads me to think it was some kind of bizarre OS thing. I hate stuff like that. Always gotta keep watch for when it might come back.
12 Feb, 2007, Keberus wrote in the 7th comment:
Votes: 0
I think it would be nice to actually calibrate pulses based on real time, to be sure that the code is machine independent. I'm just not positive how to do something like that, or if and when the timers should be updated, etc. Wondering if anyone else thinks it would be nice too, and if they have any ideas.
12 Feb, 2007, Justice wrote in the 8th comment:
Votes: 0
SMAUG already uses a time-adjusted pulse.

From FUSS:
/*
 * Synchronize to a clock.
 * Sleep( last_time + 1/PULSE_PER_SECOND - now ).
 * Careful here of signed versus unsigned arithmetic.
 */
struct timeval now_time;
long secDelta;
long usecDelta;

gettimeofday( &now_time, NULL );
usecDelta = ( ( int )last_time.tv_usec ) - ( ( int )now_time.tv_usec ) + 1000000 / PULSE_PER_SECOND;
secDelta = ( ( int )last_time.tv_sec ) - ( ( int )now_time.tv_sec );
while( usecDelta < 0 )
{
   usecDelta += 1000000;
   secDelta -= 1;
}

while( usecDelta >= 1000000 )
{
   usecDelta -= 1000000;
   secDelta += 1;
}

if( secDelta > 0 || ( secDelta == 0 && usecDelta > 0 ) )
{
   struct timeval stall_time;

   stall_time.tv_usec = usecDelta;
   stall_time.tv_sec = secDelta;
#ifdef WIN32
   Sleep( ( stall_time.tv_sec * 1000L ) + ( stall_time.tv_usec / 1000L ) );
#else
   if( select( 0, NULL, NULL, NULL, &stall_time ) < 0 && errno != EINTR )
   {
      perror( "game_loop: select: stall" );
      exit( 1 );
   }
#endif
}
12 Feb, 2007, Guest wrote in the 9th comment:
Votes: 0
Time adjustment goes right out the window though if you have stuff like this: http://bugzilla.kernel.org/show_bug.cgi?...

They claim that bug is fixed, but maybe it wasn't completely. If it's not, it could account for why code that hasn't been touched is suddenly screwy on dual-core systems.
12 Feb, 2007, Keberus wrote in the 10th comment:
Votes: 0
Wow, I feel a bit embarrassed. :redface: Thanks for pointing that out, Justice.