You would need to use the information provided by such functions along with a parser for the DWARF debugging format to look up the necessary information to generate a more useful backtrace. There is, to my knowledge, no non-GPL code available for doing this.
Scary caffeine-induced idea. Use the dl family of library routines to open your own executable's namespace and, in the backtrace function, use the GNU extension dladdr() to try to match the addresses provided by the backtrace to symbol names. You have to have compiled and linked with -rdynamic and probably -ldl in order to try this. :)
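A minimal sketch of that idea, assuming glibc (backtrace() lives in execinfo.h) and a build with -rdynamic so dladdr() can see the executable's own symbols; the function name is made up:

```c
#define _GNU_SOURCE
#include <execinfo.h>
#include <dlfcn.h>
#include <stdio.h>

/* Capture up to 32 raw return addresses with backtrace(), then ask
 * dladdr() for the nearest symbol to each one.  Without -rdynamic the
 * executable's own symbols won't resolve and you'll just see raw
 * pointers.  Returns the number of frames captured. */
static int log_backtrace(void)
{
    void *frames[32];
    int n = backtrace(frames, 32);

    for (int i = 0; i < n; i++) {
        Dl_info info;
        if (dladdr(frames[i], &info) && info.dli_sname)
            fprintf(stderr, "  #%d %s+0x%lx\n", i, info.dli_sname,
                    (unsigned long)((char *)frames[i] -
                                    (char *)info.dli_saddr));
        else
            fprintf(stderr, "  #%d %p (no symbol)\n", i, frames[i]);
    }
    return n;
}
```

The offset printed after each symbol is the distance from the function's entry point, which is enough to feed addr2line later if you want file and line numbers too.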
There's no harm in using a backtrace function, especially if it uses GDB itself and just reports to an internal log.
Unless you're working on a segfault, then you won't get much out of it. But if you have an annoying glitch in the mud, and you suspect what's causing it, an internal backtrace can be quite useful. At least in my opinion.
There's really no point in replicating gdb inside your MUD if you can reliably reproduce the bug. If you have an idea which line is the culprit, set a breakpoint normally and be happy.
The code-generated backtrace is perhaps useful in cases where the bug is very hard to reproduce, but you would be better off using assertions to cause the program to fail, so that you get a proper core dump. After all, if you already suspect a line as the culprit, you can look at the call tree to get the (probably limited) set of entry points for that function.
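To illustrate the assertion approach with the sort of pointer chain mentioned elsewhere in this thread (ch->pc_data->booger): check the invariant before dereferencing, so a broken pointer aborts and dumps core at the real scene of the crime instead of crashing three calls later. The struct fragments here are invented for the example, not taken from any real codebase:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical slivers of a Diku-style character structure. */
struct pc_data   { int booger; };
struct char_data { struct pc_data *pc_data; };

static int get_booger(struct char_data *ch)
{
    assert(ch != NULL);            /* fail here, with a core dump... */
    assert(ch->pc_data != NULL);   /* ...not somewhere far downstream */
    return ch->pc_data->booger;
}
```

Remember that assert() compiles to nothing under -DNDEBUG, and that you need `ulimit -c unlimited` (or equivalent) for the abort() to actually leave a core file.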
I would hazard a guess that the vast majority of crasher bugs in most Diku-related codebases are from pointer access where the pointer isn't valid (i.e., ch->pc_data->booger, or room->people[k]->inventory[j]), or from string overflows (hence the ever-increasing value of MAX_STRING_LENGTH, and the use of 2*MSL buffers).
In most of those cases, nothing has been printed to the logs recently, and with the crash, nothing will be printed to the logs. You literally have no breadcrumbs anywhere near the crime scene, and many of them will only happen in the right circumstances (2 players in a room and a third walks in holding a bugged item, someone tries to set their description – yeah, that never happens – and overflows something, not in that code, but perhaps in the mob-program trigger to read it).
A backtrace that just shows the calling stack would tell me what functions to look around, and half the time I'll spot the error without even needing to run the driver in gdb. The other half, at least I know where to set breakpoints and watches.
Those of us who've worked on LP muds, or in non-compiled languages like perl, python, etc, get used to having any crash tell you a little bit about what the environment was like as it crashed. Not having that is like answering the phone without turning the room lights on. Sure, you can do it, but if the phone isn't where you expected it to be, there will be much cursing and stubbing of toes trying to get to it.
I'm not sure the argument was against having some knowledge of where the crash occurred, rather, it was against the idea of replicating gdb inside the game. The above backtrace gives the stack trace that you need, without all the pain of learning to fiddle with the debugger libraries etc.
05 Feb, 2009, quixadhal wrote in the 11th comment:
Rojan QDel said:
The issue is that its output isn't exactly human readable:

Obtained 12 stack frames.
/home/anna/mp/src/swreality(_Z12MudBackTracev+0x1a) [0x816fd8e]
/home/anna/mp/src/swreality [0x813aa28]
/lib/tls/libc.so.6 [0x53e918]
/home/anna/mp/src/swreality(_Z6damageP9char_dataS0_ii+0x18c6) [0x817f57e]
/home/anna/mp/src/swreality(_Z7one_hitP9char_dataS0_i+0x379d) [0x817d79b]
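For what it's worth, those frames can be decoded after the fact without touching the code, assuming binutils is installed. The mangled names (the _Z... strings) go through c++filt, and the bracketed addresses go through addr2line if the binary was built with -g:

```shell
# Demangle a symbol name copied out of the backtrace:
echo '_Z6damageP9char_dataS0_ii' | c++filt
# → damage(char_data*, char_data*, int, int)

# Resolve a raw frame address to file:line (needs -g at compile time;
# the path and address below are the ones from the trace above):
#   addr2line -f -C -e /home/anna/mp/src/swreality 0x817f57e
```

The -C flag makes addr2line demangle for you, so in practice one tool does both jobs.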
Something about this thread bugged me this morning, even though it's been a couple days. In poking around a bit, I have to ask, are you compiling with C++? Also, did you compile with debugging symbols enabled (-g)?