09 Jul, 2012, Kline wrote in the 1st comment:
Votes: 0
I wanted to see if anyone has done something similar to this before and has good/bad/ugly stories or suggestions. I was planning to implement a command system built strictly from shared object files, one per command, so that they could be rewritten and reloaded individually, LPMUD-style.

I've worked with dlsym and shared objects before, so that's not too big an issue. However, I also wanted to allow for most if not all of this to occur from within the game. Admins have the option to mass-compile all commands pre-boot, or to thread out a compile process from within the game.

To test this, I wrote a small program to see if it was feasible, and it appears to be. Aside from the security implications of firing off commands into the shell via popen(), is there anything else I'm overlooking? Would there be an exceptional amount of overhead from potentially hundreds of very small shared libraries being loaded?
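The core of the idea, in a minimal sketch (the do_buy name, path, and signature here are illustrative, not the actual code from the repo):

#include <dlfcn.h>
#include <stdio.h>

typedef void (*cmd_t)(void *ch, const char *args);

int main(void)
{
    /* RTLD_NOW resolves all symbols up front, so a broken module
       fails at load time instead of in the middle of a command. */
    void *handle = dlopen("./cmds/do_buy.so", RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    cmd_t do_buy = (cmd_t)dlsym(handle, "do_buy");
    if (!do_buy) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    do_buy(NULL, "sword");  /* dispatch exactly as a command table would */
    dlclose(handle);        /* unload so the .so can be rebuilt and reloaded */
    return 0;
}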

I like the idea of having what could potentially be a very flexible plugin-style system with an appropriate API, but I'm wary of investing too much time in this if it won't scale properly or has some other significant flaw I'm overlooking.

Here's the GitHub for the test I wrote: https://github.com/Kline-/tools/tree/mas...
10 Jul, 2012, plamzi wrote in the 2nd comment:
Votes: 0
Kline said:
Would there be an exceptional amount of overhead from potentially hundreds of very small shared libraries being loaded?


Why wouldn't you want to compile all commands into a single shared library? The code for individual commands can still be split up into small files if that makes it easier to edit.

If you have concerns about reloading a shared library containing all your commands vs. one containing only a single command, don't worry. In my implementation of dynamic loading (in C), I attach a shared object version of the main binary, i.e. all the code is contained in it, and it overrides the code in the main binary with more up-to-date versions where applicable. And it still reloads virtually instantaneously.
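Schematically, the reload cycle looks something like this (a simplified sketch with made-up names, not my actual code):

#include <dlfcn.h>
#include <stdio.h>

static void *cmd_lib = NULL;

/* Reload the single "all commands" library: drop the old handle,
   reopen the freshly rebuilt .so, and let the caller re-resolve
   every command pointer with dlsym() against the new handle. */
void *reload_commands(const char *path)
{
    if (cmd_lib)
        dlclose(cmd_lib);  /* old function pointers are now invalid */
    cmd_lib = dlopen(path, RTLD_NOW);
    if (!cmd_lib)
        fprintf(stderr, "reload failed: %s\n", dlerror());
    return cmd_lib;
}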

Not sure whether having hundreds of shared libraries creates any overhead performance-wise, but it will definitely make maintenance and debugging trickier.
10 Jul, 2012, Kline wrote in the 3rd comment:
Votes: 0
plamzi said:
Why wouldn't you want to compile all commands into a single shared library?

Why re-compile/link more than necessary? I can also see implementing a form of security similar to LP's, like letting a lower-level imm have "their module" or "their command" that they can dev/tinker with but that is restricted from accessing other data. This seems like it would be easier to manage "per-module" rather than "per-function", so why lump them all into a single library?

plamzi said:
Not sure whether having hundreds of shared libraries creates any overhead performance-wise, but it will definitely make maintenance and debugging trickier.

Can you elaborate, please? How would debugging issues in something like do_buy() within a single monolithic library be vastly different from within a self-contained one? Ideally I'd like to create a system that really is just a plugin API, so that commands / zones / features / etc. can be added or removed on a whim without much impact or concern elsewhere.
10 Jul, 2012, Rarva.Riendf wrote in the 4th comment:
Votes: 0
Quote
I'd like to create a system that really is just a plugin API, so that commands / zones / features / etc. can be added or removed on a whim without much impact or concern elsewhere.

You will not achieve that with a plugin API or anything like it, but through coding practice, especially with a good IDE that can easily refactor or remove everything like do_buy wherever it appears in the code.
Unless the only impact you are talking about is the number of files that have to be modified to add a command, feature, etc.
10 Jul, 2012, Kline wrote in the 5th comment:
Votes: 0
Rarva.Riendf said:
You will not achieve that with a plugin API or anything like it, but through coding practice.

Any reason why? Most snippet commands and such are pretty easy drop-ins for a Diku base; paste in the function code and add it to your command table. I don't see this being exceptionally different. I agree that more in-depth systems would certainly require more reworking of other parts of the game, but I don't see why common elements such as commands, areas, a logging engine, etc., couldn't be handled this way with little or no disruption or modification elsewhere.
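For example, a typical Diku-style drop-in is just the pasted function plus one new table row, something like this (a generic approximation, not any particular codebase's table):

struct cmd_entry {
    const char *name;
    void (*fn)(void *ch, const char *args);
    int min_level;
};

void do_buy(void *ch, const char *args);        /* existing command */
void do_mycommand(void *ch, const char *args);  /* pasted snippet */

struct cmd_entry cmd_table[] = {
    { "buy",       do_buy,       0 },
    { "mycommand", do_mycommand, 0 },  /* the one added line */
    { NULL,        NULL,         0 }   /* sentinel */
};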
10 Jul, 2012, Davion wrote in the 6th comment:
Votes: 0
Kline said:
Any reason why? Most snippet commands and such are pretty easy drop-ins for a Diku base; paste in the function code and add it to your command table.


That may be true for simple things like do_score, or anything else that just reads or modifies existing information. But say your snippet calls for adding a new affect bit or a new skill/spell; then you're up the creek ;).
10 Jul, 2012, quixadhal wrote in the 7th comment:
Votes: 0
LPMUD and an engine using dlsym and friends are doing two entirely different things. All LPMUDs interpret LPC code at run time; LPC is implemented as a sandboxed language with limited APIs for accessing files and sockets from inside the sandbox. dlsym simply links new object code into the executable's namespace at runtime, which (a) isn't interpreted, and (b) isn't sandboxed.

Also, if you're thinking of using it as a way to distribute binary snippets, good luck. In most cases, objects can only be loaded if run under the same kernel, linked against the same shared libraries, and running on the same hardware architecture.

While it sounds cool, in practice dynamic loading really only buys you two things. One is the ability to lazy-load chunks of code, which isn't all that useful nowadays, when we have tons of RAM and the OS can already do that via demand-paged executables. The other is the ability to add simple things (like new commands) on the fly, and that's also mostly irrelevant, since doing a "hotboot" is simpler and almost equally transparent to the users (provided you save enough state to resume combat, etc.).
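The hotboot trick, schematically (a simplified sketch; a real implementation also serializes player and combat state before the exec, and the flag name here is made up):

#include <stdio.h>
#include <unistd.h>

/* Open file descriptors survive exec() unless marked close-on-exec,
   so the new binary can pick the listening socket back up from argv. */
void hotboot(int listen_fd, const char *binary)
{
    char fd_arg[16];
    snprintf(fd_arg, sizeof(fd_arg), "%d", listen_fd);
    execl(binary, binary, "--hotboot", fd_arg, (char *)NULL);
    perror("execl");  /* only reached if the exec itself failed */
}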
10 Jul, 2012, Runter wrote in the 8th comment:
Votes: 0
I don't really see the benefits when compared to the added complexity. The problems you seem to want to solve are all better solved by embracing an interpreted language on top of your driver.
10 Jul, 2012, Tyche wrote in the 9th comment:
Votes: 0
Kline said:
I wanted to see if anyone has done something similar to this before and has good/bad/ugly stories or suggestions. I was planning to implement a command system built strictly from shared object files, one per command, so that they could be rewritten and reloaded individually, LPMUD-style.

Here's a previous attempt at implementing something very similar:
ftp://sourcery.dyndns.org/archive/servers/othe...

I really don't have the faintest clue what the performance characteristics would be if you had hundreds or thousands of shared objects loaded.
10 Jul, 2012, plamzi wrote in the 10th comment:
Votes: 0
Kline said:
plamzi said:
Why wouldn't you want to compile all commands into a single shared library?

Why re-compile/link more than necessary? I can also see implementing a form of security similar to LP's, like letting a lower-level imm have "their module" or "their command" that they can dev/tinker with but that is restricted from accessing other data. This seems like it would be easier to manage "per-module" rather than "per-function", so why lump them all into a single library?


Well, because a "library" of one function is overkill, or maybe "underkill". A library is by definition a set of functions that "go together". If you split every command into its own library, keeping their scopes and cross-calls straight will get unnecessarily complicated.

And as for security/limiting edit access, I'm not sure how splitting each command into its own shared object helps that cause. If you want to control who gets to edit what, then you will obviously need to code something into the in-game editor that allows only certain users to change certain code. It's simpler to compile and re-attach one shared object (all commands) regardless of which command was modified; at least, that's how I would design it.

plamzi said:
Not sure whether having hundreds of shared libraries creates any overhead performance-wise, but it will definitely make maintenance and debugging trickier.

Can you elaborate, please? How would debugging issues in something like do_buy() within a single monolithic library be vastly different from within a self-contained one? Ideally I'd like to create a system that really is just a plugin API, so that commands / zones / features / etc. can be added or removed on a whim without much impact or concern elsewhere.

If a crash happens inside a shared object, gdb needs to find the right shared library in the right path(s) to show you meaningful symbol data. In my case, a solution that didn't require modifying environment variables every time I moved the code meant keeping copies of the shared library at the same path relative to the binary's working directory. Mind you, I make changes fast, and I want to keep the last 6 versions of all my compiled code, with their corresponding cores, to debug against. If I had to copy or move several hundred files every time I wanted to create a "snapshot", I would definitely feel like an inventory clerk.

As a general rule, I find that having to maintain thousands of files is a headache. When I started working on this code, it used to take almost an hour to back up the game because of all the individual item save and log files. Pretty much everything is in a database now; backup takes 3 minutes and uses a tenth of the disk space (gzipped databases).

In fact, if I were to implement in-game coding, I would put each function in a database, then dump, compile, and attach as needed. I'd still compile one library in all cases, though; it just seems so much simpler. My 2c.
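Schematically, the dump-compile-attach step might look like this (illustrative paths and flags; system() stands in for the popen() mentioned earlier, with the same security caveats):

#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

/* The dump step has already written the function source to 'src';
   compile it into a shared object at 'so' and attach it. */
void *compile_and_attach(const char *src, const char *so)
{
    char cmd[512];
    /* -fPIC -shared produces a loadable object on Linux/ELF */
    snprintf(cmd, sizeof(cmd), "gcc -fPIC -shared -o %s %s", so, src);
    if (system(cmd) != 0) {
        fprintf(stderr, "compile failed for %s\n", src);
        return NULL;
    }
    return dlopen(so, RTLD_NOW);
}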
10 Jul, 2012, plamzi wrote in the 11th comment:
Votes: 0
Runter said:
I don't really see the benefits when compared to the added complexity. The problems you seem to want to solve are all better solved by embracing an interpreted language on top of your driver.


He says the goal is to rewrite and reload commands at runtime. Using shared objects, a basic implementation of this can be achieved in 20-30 lines; I know because I've done it. It seems to me that an interpreted language would require a lot more work, even if you're just integrating one that's tailor-made for this kind of thing.

The advantage of using an interpreted language is that it's sandboxed and it won't be as easy to hurt yourself.
10 Jul, 2012, Runter wrote in the 12th comment:
Votes: 0
plamzi said:
Runter said:
I don't really see the benefits when compared to the added complexity. The problems you seem to want to solve are all better solved by embracing an interpreted language on top of your driver.


He says the goal is to rewrite and reload commands at runtime. […]

Yes, better solved by interpreting code.

Quote
The advantage of using an interpreted language is that it's sandboxed and it won't be as easy to hurt yourself.


There are C interpreters, and they're not any more sandboxed than C itself. I think you're making a bad assumption that every interpreted language is a Lua or JavaScript implementation. Most people find that almost all interpreters are nearly impossible to use as a safe sandbox.
10 Jul, 2012, Rarva.Riendf wrote in the 13th comment:
Votes: 0
Kline said:
Rarva.Riendf said:
You will not achieve that with a plugin API or anything like it, but through coding practice.

Any reason why? Most snippet commands and such are pretty easy drop-ins for a Diku base;

And that is the reason you see them as snippets. Everything else impact so many pieces of code no one bother posting them as it would require an extensive documentation.
11 Jul, 2012, plamzi wrote in the 14th comment:
Votes: 0
@Runter

The reason I'm assuming you meant something like Lua or JS is that interpreted C makes no sense to me when you can just write C and load it dynamically. The latter can be implemented in 15-20 minutes. What exactly are you suggesting, and is it more bang for your buck?

I'm also assuming that any interpreted language that is a subset of another is easier to sandbox, in the sense of not letting it crash you. From what I've seen, Lua and JS implementations come pre-sandboxed. I'd never heard of interpreted C until now; this may not be true for it.
12 Jul, 2012, Kline wrote in the 15th comment:
Votes: 0
Thanks for all the comments... I think. Now I'm kicking around even more ideas in my head and haven't decided which route I want to pursue. Lots of interesting points raised!
12 Jul, 2012, plamzi wrote in the 16th comment:
Votes: 0
If you're writing in C and decide to use shared objects, PM me for some code bits. About 95% of my codebase can be loaded dynamically.