15 Dec, 2008, David Haley wrote in the 21st comment:
Votes: 0
If you want to execute an external process, then yes. You could also try googling for something like "linux C CPU usage api". A lot of the references are scripts and the like. You could try running a script that dumps the output every 10 seconds or something like that and then making a graph of the load.
15 Dec, 2008, Zeno wrote in the 22nd comment:
Votes: 0
Here's some data:
Mon Dec 15 13:30:25 2008 :: [*****] LAG: Xio: save  (R:1010 S:2.069178)


Mon Dec 15 13:30:20 EST 2008
top - 13:30:20 up 136 days, 18:52, 1 user, load average: 0.00, 0.00, 0.00
Tasks: 46 total, 1 running, 45 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.0%sy, 0.1%ni, 99.3%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 524288k total, 225452k used, 298836k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 0k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 15 0 1980 564 536 S 0 0.1 0:03.43 init
3444 root 18 0 2604 108 104 S 0 0.0 0:00.00 xinetd
5806 zeno 18 0 2328 992 860 S 0 0.2 0:00.05 cpu.bash
5906 root 15 0 3184 208 148 S 0 0.0 0:01.03 crond
7986 root 18 0 9952 2828 2268 S 0 0.5 0:00.21 sshd
7996 root 15 0 2516 1364 1096 S 0 0.3 0:00.01 bash
8088 root 18 0 2760 1100 872 S 0 0.2 0:00.00 su
8090 zeno 15 0 2520 1432 1136 S 0 0.3 0:00.10 bash
9428 zeno 18 0 2664 64 60 S 0 0.0 0:00.00 startup
9445 zeno 18 0 2672 608 360 S 0 0.1 0:00.03 startup
9488 zeno 34 19 17996 14m 2988 S 0 2.9 190:26.77 biyg
9887 zeno 18 0 2116 916 720 R 0 0.2 0:00.00 top
9888 zeno 18 0 1596 408 340 S 0 0.1 0:00.00 head
————————————————————————-
Mon Dec 15 13:30:21 EST 2008
top - 13:30:22 up 136 days, 18:52, 1 user, load average: 0.00, 0.00, 0.00
Tasks: 46 total, 1 running, 45 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.0%sy, 0.1%ni, 99.3%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 524288k total, 225448k used, 298840k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 0k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 15 0 1980 564 536 S 0 0.1 0:03.43 init
3444 root 18 0 2604 108 104 S 0 0.0 0:00.00 xinetd
5806 zeno 15 0 2328 996 860 S 0 0.2 0:00.05 cpu.bash
5906 root 15 0 3184 208 148 S 0 0.0 0:01.03 crond
7986 root 18 0 9952 2828 2268 S 0 0.5 0:00.21 sshd
7996 root 15 0 2516 1364 1096 S 0 0.3 0:00.01 bash
8088 root 18 0 2760 1100 872 S 0 0.2 0:00.00 su
8090 zeno 15 0 2520 1432 1136 S 0 0.3 0:00.10 bash
9428 zeno 18 0 2664 64 60 S 0 0.0 0:00.00 startup
9445 zeno 18 0 2672 608 360 S 0 0.1 0:00.03 startup
9488 zeno 34 19 17996 14m 2988 S 0 2.9 190:26.77 biyg
9892 zeno 15 0 2116 916 720 R 0 0.2 0:00.00 top
9893 zeno 15 0 1592 400 340 S 0 0.1 0:00.00 head
————————————————————————-
Mon Dec 15 13:30:23 EST 2008
top - 13:30:23 up 136 days, 18:52, 1 user, load average: 0.00, 0.00, 0.00
Tasks: 46 total, 1 running, 45 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.0%sy, 0.1%ni, 99.3%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 524288k total, 225448k used, 298840k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 0k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 15 0 1980 564 536 S 0 0.1 0:03.43 init
3444 root 18 0 2604 108 104 S 0 0.0 0:00.00 xinetd
5806 zeno 15 0 2328 996 860 S 0 0.2 0:00.05 cpu.bash
5906 root 15 0 3184 208 148 S 0 0.0 0:01.03 crond
7986 root 18 0 9952 2828 2268 S 0 0.5 0:00.21 sshd
7996 root 15 0 2516 1364 1096 S 0 0.3 0:00.01 bash
8088 root 18 0 2760 1100 872 S 0 0.2 0:00.00 su
8090 zeno 15 0 2520 1432 1136 S 0 0.3 0:00.10 bash
9428 zeno 18 0 2664 64 60 S 0 0.0 0:00.00 startup
9445 zeno 18 0 2672 608 360 S 0 0.1 0:00.03 startup
9488 zeno 34 19 17996 14m 2988 S 0 2.9 190:26.77 biyg
9898 zeno 18 0 2116 916 720 R 0 0.2 0:00.00 top
9899 zeno 15 0 1592 400 340 S 0 0.1 0:00.00 head
————————————————————————-
Mon Dec 15 13:30:24 EST 2008
top - 13:30:25 up 136 days, 18:52, 1 user, load average: 0.00, 0.00, 0.00
Tasks: 46 total, 1 running, 45 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.0%sy, 0.1%ni, 99.3%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 524288k total, 225448k used, 298840k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 0k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20127 zeno 34 19 14176 11m 2816 S 2 2.2 105:45.39 biyg
1 root 15 0 1980 564 536 S 0 0.1 0:03.43 init
3444 root 18 0 2604 108 104 S 0 0.0 0:00.00 xinetd
5806 zeno 15 0 2328 996 860 S 0 0.2 0:00.05 cpu.bash
5906 root 15 0 3184 208 148 S 0 0.0 0:01.03 crond
7986 root 18 0 9952 2828 2268 S 0 0.5 0:00.21 sshd
7996 root 15 0 2516 1364 1096 S 0 0.3 0:00.01 bash
8088 root 18 0 2760 1100 872 S 0 0.2 0:00.00 su
8090 zeno 15 0 2520 1432 1136 S 0 0.3 0:00.10 bash
9428 zeno 18 0 2664 64 60 S 0 0.0 0:00.00 startup
9445 zeno 18 0 2672 608 360 S 0 0.1 0:00.03 startup
9488 zeno 34 19 17996 14m 2988 S 0 2.9 190:26.77 biyg
9908 zeno 15 0 2112 912 720 R 0 0.2 0:00.00 top
————————————————————————-
Mon Dec 15 13:30:26 EST 2008
top - 13:30:27 up 136 days, 18:52, 1 user, load average: 0.00, 0.00, 0.00
Tasks: 46 total, 1 running, 45 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.0%sy, 0.1%ni, 99.3%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 524288k total, 225452k used, 298836k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 0k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 15 0 1980 564 536 S 0 0.1 0:03.43 init
3444 root 18 0 2604 108 104 S 0 0.0 0:00.00 xinetd
5806 zeno 18 0 2328 996 860 S 0 0.2 0:00.05 cpu.bash
5906 root 15 0 3184 208 148 S 0 0.0 0:01.03 crond
7986 root 18 0 9952 2828 2268 S 0 0.5 0:00.21 sshd
7996 root 15 0 2516 1364 1096 S 0 0.3 0:00.01 bash
8088 root 18 0 2760 1100 872 S 0 0.2 0:00.00 su
8090 zeno 15 0 2520 1432 1136 S 0 0.3 0:00.10 bash
9428 zeno 18 0 2664 64 60 S 0 0.0 0:00.00 startup
9445 zeno 18 0 2672 608 360 S 0 0.1 0:00.03 startup
9488 zeno 34 19 17996 14m 2988 D 0 2.9 190:26.77 biyg
9918 zeno 18 0 2116 916 720 R 0 0.2 0:00.00 top
9919 zeno 18 0 1596 400 340 S 0 0.1 0:00.00 head
————————————————————————-
Mon Dec 15 13:30:28 EST 2008
top - 13:30:28 up 136 days, 18:52, 1 user, load average: 0.00, 0.00, 0.00
Tasks: 46 total, 1 running, 45 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.0%sy, 0.1%ni, 99.3%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 524288k total, 225444k used, 298844k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 0k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 15 0 1980 564 536 S 0 0.1 0:03.43 init
3444 root 18 0 2604 108 104 S 0 0.0 0:00.00 xinetd
5806 zeno 15 0 2328 996 860 S 0 0.2 0:00.05 cpu.bash
5906 root 15 0 3184 208 148 S 0 0.0 0:01.03 crond
7986 root 18 0 9952 2828 2268 S 0 0.5 0:00.21 sshd
7996 root 15 0 2516 1364 1096 S 0 0.3 0:00.01 bash
8088 root 18 0 2760 1100 872 S 0 0.2 0:00.00 su
8090 zeno 15 0 2520 1432 1136 S 0 0.3 0:00.10 bash
9428 zeno 18 0 2664 64 60 S 0 0.0 0:00.00 startup
9445 zeno 18 0 2672 608 360 S 0 0.1 0:00.03 startup
9488 zeno 34 19 17996 14m 2988 S 0 2.9 190:26.77 biyg
9931 zeno 18 0 2112 912 720 R 0 0.2 0:00.00 top
9932 zeno 15 0 1592 404 340 S 0 0.1 0:00.00 head



Mon Dec 15 13:35:40 2008 :: [*****] LAG: Xio: save  (R:1010 S:1.535619)


Mon Dec 15 13:35:34 EST 2008
top - 13:35:35 up 136 days, 18:57, 1 user, load average: 0.04, 0.02, 0.00
Tasks: 46 total, 2 running, 44 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.0%sy, 0.1%ni, 99.3%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 524288k total, 225448k used, 298840k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 0k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 15 0 1980 564 536 S 0 0.1 0:03.43 init
3444 root 18 0 2604 108 104 S 0 0.0 0:00.00 xinetd
5906 root 15 0 3184 208 148 S 0 0.0 0:01.03 crond
7986 root 15 0 9952 2828 2268 S 0 0.5 0:00.23 sshd
7996 root 15 0 2516 1364 1096 S 0 0.3 0:00.01 bash
8088 root 18 0 2760 1100 872 S 0 0.2 0:00.00 su
8090 zeno 17 0 2520 1432 1136 S 0 0.3 0:00.10 bash
9428 zeno 18 0 2664 64 60 S 0 0.0 0:00.00 startup
9445 zeno 18 0 2672 608 360 S 0 0.1 0:00.03 startup
9488 zeno 34 19 17996 14m 2988 S 0 2.9 190:27.22 biyg
10144 root 15 0 1648 552 464 S 0 0.1 0:22.76 syslogd
10147 root 20 0 1592 324 320 S 0 0.1 0:00.00 klogd
11968 zeno 15 0 2332 988 860 S 0 0.2 0:00.01 cpu.bash
————————————————————————-
Mon Dec 15 13:35:36 EST 2008
top - 13:35:36 up 136 days, 18:57, 1 user, load average: 0.04, 0.02, 0.00
Tasks: 46 total, 1 running, 45 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.0%sy, 0.1%ni, 99.3%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 524288k total, 225456k used, 298832k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 0k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 15 0 1980 564 536 S 0 0.1 0:03.43 init
3444 root 18 0 2604 108 104 S 0 0.0 0:00.00 xinetd
5906 root 15 0 3184 208 148 S 0 0.0 0:01.03 crond
7986 root 15 0 9952 2828 2268 S 0 0.5 0:00.23 sshd
7996 root 15 0 2516 1364 1096 S 0 0.3 0:00.01 bash
8088 root 18 0 2760 1100 872 S 0 0.2 0:00.00 su
8090 zeno 17 0 2520 1432 1136 S 0 0.3 0:00.10 bash
9428 zeno 18 0 2664 64 60 S 0 0.0 0:00.00 startup
9445 zeno 18 0 2672 608 360 S 0 0.1 0:00.03 startup
9488 zeno 34 19 17996 14m 2988 S 0 2.9 190:27.22 biyg
10144 root 15 0 1648 552 464 S 0 0.1 0:22.76 syslogd
10147 root 20 0 1592 324 320 S 0 0.1 0:00.00 klogd
11968 zeno 15 0 2332 988 860 S 0 0.2 0:00.01 cpu.bash
————————————————————————-
Mon Dec 15 13:35:37 EST 2008
top - 13:35:38 up 136 days, 18:57, 1 user, load average: 0.04, 0.02, 0.00
Tasks: 46 total, 1 running, 45 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.0%sy, 0.1%ni, 99.3%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 524288k total, 225448k used, 298840k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 0k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 15 0 1980 564 536 S 0 0.1 0:03.43 init
3444 root 18 0 2604 108 104 S 0 0.0 0:00.00 xinetd
5906 root 15 0 3184 208 148 S 0 0.0 0:01.03 crond
7986 root 15 0 9952 2828 2268 S 0 0.5 0:00.23 sshd
7996 root 15 0 2516 1364 1096 S 0 0.3 0:00.01 bash
8088 root 18 0 2760 1100 872 S 0 0.2 0:00.00 su
8090 zeno 17 0 2520 1432 1136 S 0 0.3 0:00.10 bash
9428 zeno 18 0 2664 64 60 S 0 0.0 0:00.00 startup
9445 zeno 18 0 2672 608 360 S 0 0.1 0:00.03 startup
9488 zeno 34 19 17996 14m 2988 S 0 2.9 190:27.22 biyg
10144 root 15 0 1648 552 464 S 0 0.1 0:22.76 syslogd
10147 root 20 0 1592 324 320 S 0 0.1 0:00.00 klogd
11968 zeno 18 0 2332 988 860 S 0 0.2 0:00.01 cpu.bash
————————————————————————-
Mon Dec 15 13:35:39 EST 2008
top - 13:35:39 up 136 days, 18:57, 1 user, load average: 0.04, 0.02, 0.00
Tasks: 46 total, 1 running, 45 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.0%sy, 0.1%ni, 99.3%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 524288k total, 225448k used, 298840k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 0k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20127 zeno 34 19 14176 11m 2816 S 12 2.2 105:45.94 biyg
1 root 15 0 1980 564 536 S 0 0.1 0:03.43 init
3444 root 18 0 2604 108 104 S 0 0.0 0:00.00 xinetd
5906 root 15 0 3184 208 148 S 0 0.0 0:01.03 crond
7986 root 15 0 9952 2828 2268 S 0 0.5 0:00.23 sshd
7996 root 15 0 2516 1364 1096 S 0 0.3 0:00.01 bash
8088 root 18 0 2760 1100 872 S 0 0.2 0:00.00 su
8090 zeno 17 0 2520 1432 1136 S 0 0.3 0:00.10 bash
9428 zeno 18 0 2664 64 60 S 0 0.0 0:00.00 startup
9445 zeno 18 0 2672 608 360 S 0 0.1 0:00.03 startup
9488 zeno 34 19 17996 14m 2988 S 0 2.9 190:27.22 biyg
10144 root 15 0 1648 552 464 S 0 0.1 0:22.76 syslogd
10147 root 20 0 1592 324 320 S 0 0.1 0:00.00 klogd
————————————————————————-
Mon Dec 15 13:35:40 EST 2008
top - 13:35:41 up 136 days, 18:57, 1 user, load average: 0.04, 0.02, 0.00
Tasks: 46 total, 1 running, 45 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.0%sy, 0.1%ni, 99.3%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 524288k total, 225460k used, 298828k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 0k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 15 0 1980 564 536 S 0 0.1 0:03.43 init
3444 root 18 0 2604 108 104 S 0 0.0 0:00.00 xinetd
5906 root 15 0 3184 208 148 S 0 0.0 0:01.03 crond
7986 root 15 0 9952 2828 2268 S 0 0.5 0:00.23 sshd
7996 root 15 0 2516 1364 1096 S 0 0.3 0:00.01 bash
8088 root 18 0 2760 1100 872 S 0 0.2 0:00.00 su
8090 zeno 17 0 2520 1432 1136 S 0 0.3 0:00.10 bash
9428 zeno 18 0 2664 64 60 S 0 0.0 0:00.00 startup
9445 zeno 18 0 2672 608 360 S 0 0.1 0:00.03 startup
9488 zeno 34 19 17996 14m 2988 D 0 2.9 190:27.23 biyg
10144 root 15 0 1648 552 464 S 0 0.1 0:22.76 syslogd
10147 root 20 0 1592 324 320 S 0 0.1 0:00.00 klogd
11968 zeno 15 0 2332 988 860 S 0 0.2 0:00.01 cpu.bash
————————————————————————-
Mon Dec 15 13:35:42 EST 2008
top - 13:35:43 up 136 days, 18:57, 1 user, load average: 0.04, 0.02, 0.00
Tasks: 46 total, 1 running, 45 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.0%sy, 0.1%ni, 99.3%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 524288k total, 225452k used, 298836k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 0k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 15 0 1980 564 536 S 0 0.1 0:03.43 init
3444 root 18 0 2604 108 104 S 0 0.0 0:00.00 xinetd
5906 root 15 0 3184 208 148 S 0 0.0 0:01.03 crond
7986 root 15 0 9952 2828 2268 S 0 0.5 0:00.23 sshd
7996 root 15 0 2516 1364 1096 S 0 0.3 0:00.01 bash
8088 root 18 0 2760 1100 872 S 0 0.2 0:00.00 su
8090 zeno 17 0 2520 1432 1136 S 0 0.3 0:00.10 bash
9428 zeno 18 0 2664 64 60 S 0 0.0 0:00.00 startup
9445 zeno 18 0 2672 608 360 S 0 0.1 0:00.03 startup
9488 zeno 34 19 17996 14m 2988 S 0 2.9 190:27.23 biyg
10144 root 15 0 1648 552 464 S 0 0.1 0:22.76 syslogd
10147 root 20 0 1592 324 320 S 0 0.1 0:00.00 klogd
11968 zeno 15 0 2332 988 860 S 0 0.2 0:00.01 cpu.bash
————————————————————————-
Mon Dec 15 13:35:44 EST 2008
top - 13:35:44 up 136 days, 18:57, 1 user, load average: 0.03, 0.02, 0.00
Tasks: 46 total, 1 running, 45 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.0%sy, 0.1%ni, 99.3%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 524288k total, 225456k used, 298832k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 0k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 15 0 1980 564 536 S 0 0.1 0:03.43 init
3444 root 18 0 2604 108 104 S 0 0.0 0:00.00 xinetd
5906 root 15 0 3184 208 148 S 0 0.0 0:01.03 crond
7986 root 15 0 9952 2828 2268 S 0 0.5 0:00.23 sshd
7996 root 15 0 2516 1364 1096 S 0 0.3 0:00.01 bash
8088 root 18 0 2760 1100 872 S 0 0.2 0:00.00 su
8090 zeno 17 0 2520 1432 1136 S 0 0.3 0:00.10 bash
9428 zeno 18 0 2664 64 60 S 0 0.0 0:00.00 startup
9445 zeno 18 0 2672 608 360 S 0 0.1 0:00.03 startup
9488 zeno 34 19 17996 14m 2988 S 0 2.9 190:27.23 biyg
10144 root 15 0 1648 552 464 S 0 0.1 0:22.76 syslogd
10147 root 20 0 1592 324 320 S 0 0.1 0:00.00 klogd
11968 zeno 15 0 2332 988 860 S 0 0.2 0:00.01 cpu.bash

Honestly not seeing anything here. 5 players on the MUD, not a lot going on.
15 Dec, 2008, David Haley wrote in the 23rd comment:
Votes: 0
Indeed nothing looks out of the ordinary. I guess that at this point all you can do is ask them if anything was happening at those times.

It's possible that they don't know enough about system administration to even know how to answer the question, but at this point it looks like it's something out of your control.

Again, if the problem is on the underlying physical box, there's nothing you'll be able to do or see yourself.
15 Dec, 2008, Zeno wrote in the 24th comment:
Votes: 0
Quote
Again, if the problem is on the underlying physical box, there's nothing you'll be able to do or see yourself.

"You" meaning what? Me, or the support team (or anyone)? If you mean me, I've never been looking to do something with this as I'm pretty sure it's not a problem on my end (code).

Scandum says this is typical on a VPS, but I don't have another VPS to test this with.
15 Dec, 2008, Davion wrote in the 25th comment:
Votes: 0
If it usually only happens on writes to the hard drive, then this is kind of expected. If there are quite a few VPSes on your machine and they're all trying to hit the disk, yours will have to wait in line. Might want to consider threading your file I/O to avoid the lag. Something there also strikes me as odd: note that at the time in question, biyg's CPU usage shoots up.
15 Dec, 2008, Zeno wrote in the 26th comment:
Votes: 0
Hm, I've seen it happen on: kill, mpmload, save, give, mpforce

Couldn't get it to happen with glance.
15 Dec, 2008, Guest wrote in the 27th comment:
Votes: 0
I think it's safe to say the physical machine is acting up, since you've been unable to document a CPU spike within your own VPS. The trouble now is that since they're telling you MUDs violate the ToS, you have to be careful about how you go about opening a trouble ticket with the company. If they decide you're doing something against the rules, you could find yourself cut off with no notice. With that in mind, make some backups while you still can.
15 Dec, 2008, Zeno wrote in the 28th comment:
Votes: 0
Davion: Does that mean if I write some basic C code to read/write and time it, it'll eventually lag so that I can show it to the techs?

Samson: When I signed up, they said I needed a valid reason to use ports. I said for MUDs. When they later said MUDs were against the ToS, I pointed them to the ticket where they accepted MUDs being run as a valid reason to use ports. They allowed it. Wouldn't say I'm in the clear, but right now I'm okay.
15 Dec, 2008, Caius wrote in the 29th comment:
Votes: 0
I had this exact problem on my old VPS; it's the very reason I changed to another one. I haven't had any lag at all since, not even when saving all areas in one go and the like. They've probably crammed too many VPSes onto a single server.
15 Dec, 2008, David Haley wrote in the 30th comment:
Votes: 0
Zeno said:
"You" meaning what? Me, or the support team (or anyone)? If you mean me, I've never been looking to do something with this as I'm pretty sure it's not a problem on my end (code).

I meant that you will never be able to see the process or activity that is consuming the resources if it's a problem on the physical machine instead of your virtual machine.

Zeno said:
Davion: Does that mean if I write some basic C code to read/write and time it, it'll eventually lag so that I can show it to the techs?

You could try that; the problem is that it wouldn't prove much either.


At the end of the day, it seems like you are suffering from bad resource management on the physical machine that hosts the virtual machines. One of them is probably occasionally acting up and using a bunch of resources, causing everybody else to hurt. A common cause is swapping, as somebody mentioned earlier. It could be that they have allocated more RAM to virtual machines than they actually have available, meaning the VMs are competing for RAM and segments of VMs are getting swapped out. Swapping is very bad for performance because the whole process (in this case your whole MUD, or even the whole VM, depending on where the swapping happens) will block.

It's possible, likely even, that threading will do nothing for you if the slow-down is at the VM level and not the process level.


Frankly, I think your only option is to tell them that you're seeing unusual usage spikes that you cannot explain. If you get lousy responses, you know that some combination of the following is true: (a) they don't care and therefore have crappy support, (b) they're incompetent and don't know how to answer the question, (c) they're assuming you're stupid and therefore still have crappy support, (d) they're incompetent and don't know that their system is set up in a way that creates these problems, and the list goes on.
15 Dec, 2008, Zeno wrote in the 31st comment:
Votes: 0
Yeah, I wasn't trying to "prove" the issue, just show where I am getting the concern from (aside from my MUD, because they'd probably pull something like "we aren't meant for muds" like Bluehost did).

I had submitted a ticket about this before when they tried to get me for hosting a MUD. In the middle of it, one thing they said was:
Quote
Some of our techs are investigating the server but everything appears to be normal. I will have admins checking it later on today or tomorrow during the day, but if they do not find anything I am afraid there is not much we can do about it. So far we did not receive any complains from users on host node where you are located.
15 Dec, 2008, quixadhal wrote in the 32nd comment:
Votes: 0
The only thing that jumps out at me from your top posting is this line:

Quote
# PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
# 20127 zeno 34 19 14176 11m 2816 S 12 2.2 105:45.94 biyg


Notice the "NIce" value of 19? That means your process has the lowest priority of anything else on the machine. If the machine isn't overloaded (and this doesn't appear to be), that normally doesn't matter much. However, if something on there IS causing very brief CPU spikes, your mud will be among the very last processes given a time slice. I'm not familiar with the VM subsystem on modern kernels, but if there were a brief memory usage, it would probably also be among the first things swapped out.

A "nice" value of 0 is normal. Negative numbers are higher-than-normal and usually should only go to things that are time critical (such as device drivers). Positive numbers are lower priority (more nice), and the range is -20 to 19.

It might be a red herring, since your system doesn't seem to be busy in any way, but it also might be worth asking why your processes are ranked so low. I know at university, we used to put graduate processes at nice 1, and undergrad at nice 2, just so system tasks got priority and grads got a slight edge on undergrads. :)

FYI, these historically worked as a literal priority queue: you shifted the values so they ranged from 0 to 39, and things in queue 0 were processed on every trip through the run queue, while things in queue 39 were skipped 39 times through the loop.
15 Dec, 2008, David Haley wrote in the 33rd comment:
Votes: 0
Recall that this is in a VM, so the issue could be coming from the VM process on the host physical machine, and not anything at all that Zeno can see. It's not uncommon to see processes assigned a 'niceness' of 19, and yet run just as fast as they need to.
15 Dec, 2008, Davion wrote in the 34th comment:
Votes: 0
quixadhal said:
It might be a red herring, since your system doesn't seem to be busy in any way, but it also might be worth asking why your processes are ranked so low.


I don't think it matters on a VPS. He can set the nice value of his own procs.
15 Dec, 2008, Guest wrote in the 35th comment:
Votes: 0
quixadhal said:
The only thing that jumps out at me from your top posting is this line:

Quote
# PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
# 20127 zeno 34 19 14176 11m 2816 S 12 2.2 105:45.94 biyg


Notice the "NIce" value of 19?


This may seem like a dumb question, but how do you get it to display the "nice" level? I don't see anything in the ps command's help to indicate showing that.
15 Dec, 2008, Cratylus wrote in the 36th comment:
Votes: 0
Samson said:
quixadhal said:
The only thing that jumps out at me from your top posting is this line:

Quote
# PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
# 20127 zeno 34 19 14176 11m 2816 S 12 2.2 105:45.94 biyg


Notice the "NIce" value of 19?


This may seem like a dumb question, but how do you get it to display the "nice" level? I don't see anything in the ps command's help to indicate showing that.


Looks like output from the top command, not ps.

-Crat
http://lpmuds.net
15 Dec, 2008, Guest wrote in the 37th comment:
Votes: 0
See, I told you it would sound dumb. Pays to read things more carefully.

Looked at mine real quick, seems to be more or less default behavior for user processes to run at "nice 19" since all the ones that popped up on top for my servers are doing that.
16 Dec, 2008, Scandum wrote in the 38th comment:
Votes: 0
While we're at it, one bottleneck I know of is having more than a hundred files in a directory, which can cause lag spikes of between 0.1 and 2 seconds when a player file is accessed infrequently; it has something to do with caching. It's especially bad if you dump all the player files in player/ and your mud has no auto-purger.

This is easily fixed by storing Bubba in player/b/u, etc.
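The scheme Scandum describes (Bubba's file living under player/b/u/) is a two-letter shard of the name; a sketch, assuming ASCII names of at least two letters (player_path is a made-up helper name):

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Build "player/b/u/Bubba" from "Bubba": shard on the first two letters
 * so no single directory collects hundreds of entries. */
int player_path(char *out, size_t outlen, const char *name)
{
    if (strlen(name) < 2)
        return -1;              /* real code would pad or special-case these */
    int n = snprintf(out, outlen, "player/%c/%c/%s",
                     tolower((unsigned char)name[0]),
                     tolower((unsigned char)name[1]),
                     name);
    return (n > 0 && (size_t)n < outlen) ? 0 : -1;
}
```

Calling `player_path(buf, sizeof buf, "Bubba")` fills buf with `player/b/u/Bubba`; with 26 letters per level, even tens of thousands of player files stay spread thin.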
16 Dec, 2008, Zeno wrote in the 39th comment:
Votes: 0
Hm. Well I was getting this lag on "Xio", so I looked in the "x" directory. Only 2 files.
16 Dec, 2008, David Haley wrote in the 40th comment:
Votes: 0
That is a disk problem. But frankly, having even hundreds of files in a directory shouldn't cause a program to halt for a few seconds unless the files are stored on a network or the disk is already under pretty heavy load…

(If you don't believe me, just try running ls in /usr/bin or something on your local machine. It should be very fast; in fact, it should take longer to print the listing than it takes to read the files!)