/*
* Synchronize to a clock.
* Sleep( last_time + 1/PULSE_PER_SECOND - now ).
* Careful here of signed versus unsigned arithmetic.
*/
{
    struct timeval now_time;
    long secDelta;
    long usecDelta;

    gettimeofday( &now_time, NULL );
    /* Cast to long (not int): the deltas are stored in longs, and the
     * tv_sec/tv_usec fields are not plain ints on all platforms. */
    usecDelta = ( ( long )last_time.tv_usec ) - ( ( long )now_time.tv_usec )
        + 1000000 / PULSE_PER_SECOND;
    secDelta = ( ( long )last_time.tv_sec ) - ( ( long )now_time.tv_sec );
    while( usecDelta < 0 )
    {
        usecDelta += 1000000;
        secDelta -= 1;
    }
    while( usecDelta >= 1000000 )
    {
        usecDelta -= 1000000;
        secDelta += 1;
    }
    if( secDelta > 0 || ( secDelta == 0 && usecDelta > 0 ) )
    {
        struct timeval stall_time;

        stall_time.tv_usec = usecDelta;
        stall_time.tv_sec = secDelta;
#ifdef WIN32
        Sleep( ( stall_time.tv_sec * 1000L ) + ( stall_time.tv_usec / 1000L ) );
#else
        /* Needs <errno.h> for errno and EINTR. */
        if( select( 0, NULL, NULL, NULL, &stall_time ) < 0 && errno != EINTR )
        {
            perror( "game_loop: select: stall" );
            exit( 1 );
        }
#endif
    }
}
I recently upgraded both of my servers to dual-core AMD64 CPUs. Obviously I am thrilled with the new speed and such, but it seems to have had an interesting side effect I wasn't counting on.
In the game_loop handlers that deal with idling descriptors, my code is just like any other Merc derivative: it adds one to the idle counter with each pulse as it runs the game loop. Immortals are all set to idle off the mud after 2 hours. Lately, though, that has been drastically cut to somewhere in the neighborhood of 15 minutes.
The deeper implications of this should be clear: it suggests that all of the handlers game_loop paces are firing at much shorter intervals than they should be. Zeno was able to verify this easily enough on his game, which is affected the same way ( same server, etc. ). It appears as though everything has been accelerated to about 8x normal speed.
Is this something that is known and can be corrected? Or have I stumbled into one of those uncharted areas those of us on the bleeding edge often end up in?
EDIT: After a make clean and reboot, the skew is down to twice normal speed.