14 Feb, 2009, JohnnyStarr wrote in the 1st comment:
Votes: 0
Hey all,

Just wondering if someone has created a MySQL backend to house and edit area info instead of the typical .are file method.

I'm running a ROM-based MUD, and it would be a big help if someone could give me a pointer.
14 Feb, 2009, Zeno wrote in the 2nd comment:
Votes: 0
People have done a lot with MySQL, so I wouldn't be surprised if they did areas too. I'm sure there are MUDs out there storing data fully in MySQL.

If you aren't sure how to get started using MySQL in a codebase, try looking at AFKMUD's sql.c and sql.h.
14 Feb, 2009, JohnnyStarr wrote in the 3rd comment:
Votes: 0
wow, thanks for the speedy reply :)

very helpful link as well. :alien:
15 Feb, 2009, Caius wrote in the 4th comment:
Votes: 0
I store all my MUD data in a PostgreSQL database, including areas. Beyond that, I'm not exactly sure what your question is.
15 Feb, 2009, Skol wrote in the 5th comment:
Votes: 0
Hm, maybe a RaM version… MyRaM? But have it save redundantly to .are files and pfiles periodically, perhaps, as a method of backup.
15 Feb, 2009, David Haley wrote in the 6th comment:
Votes: 0
If you want backups, periodically dump the database. Storing to a different file format means maintaining separate load/save routines, which is a recipe for trouble.
16 Feb, 2009, JohnnyStarr wrote in the 7th comment:
Votes: 0
Ok, thanks for the replies.

But here is my concern: the flat DB files (.are) are a kind of clunky, non-relational format.
With MySQL or any other relational DB you could enforce more structure.

Trust me, this is not an easy decision; I suppose there is a downside to both.

David Haley mentioned that maintaining separate load/save routines is a recipe for trouble.
I definitely agree; it's a big pain, plus a potential hazard.

My main 'beef' with the data structure is that adding anything seems like a huge burden.
Say I wanted to add a flag called "rarity" to objects, e.g. Rare, Common, Uncommon, Epic, etc.
I'm stuck having to add it to the code in a million places, then I would have to edit every .are file,
and I would have to edit the load/save methods as well.

You guys have been around forever, and I know that I'm still fairly new to MUD programming;
I appreciate your views. I come from a strict object-oriented background, so do you think
I should just bite the bullet and make the many changes, or should I try to create a more OOP / encapsulated approach?
16 Feb, 2009, quixadhal wrote in the 8th comment:
Votes: 0
There's also a question of how you want to make use of a database in your code. If all you're doing is replacing the data store (i.e. load/save for everything), it's a different endeavor than allowing ad-hoc queries to be run (by your wizards, or your scripts, or your special procedures).

There are two basic approaches to SQL support in a C codebase. One is to pretend you're in a string-based language and write simple wrappers that essentially sprintf the query text, escape the strings as needed, and ship it off to the database. Results typically come back as arrays of strings, which you then have to pick apart and put into whatever local variables you are using.

The other way is to use bound variables, and that's what I suggest, especially if you limit the number of places in the code where you have to do it. Binding the variables as parameters means you don't have to mess about with escaping strings (the API layer handles it for you), and most APIs will allocate space for the result sets as needed, in some cases preserving types… so if you query a row that has two ints and a varchar, you get back a structure with two ints and a char *.
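
For illustration, here's a minimal sketch of the bound-variable approach using SQLite's prepared-statement API (MySQL's mysql_stmt_* calls follow the same pattern); the table and column names are just made up for the example:

#include <sqlite3.h>
#include <stdio.h>

/* Look up a room name by vnum, using a bound parameter instead of
 * sprintf'ing the vnum into the query text. */
int load_room_name(sqlite3 *db, int vnum, char *buf, size_t buflen)
{
    sqlite3_stmt *stmt;
    const char *sql = "SELECT name FROM rooms WHERE vnum = ?";

    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
        return -1;

    sqlite3_bind_int(stmt, 1, vnum);        /* no string escaping needed */

    int rc = -1;
    if (sqlite3_step(stmt) == SQLITE_ROW) {
        snprintf(buf, buflen, "%s", (const char *) sqlite3_column_text(stmt, 0));
        rc = 0;
    }
    sqlite3_finalize(stmt);
    return rc;
}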

Changing your data structures will still require you to change your code, but if you can keep it organized well enough, it'll only be in a couple places. :)
16 Feb, 2009, Davion wrote in the 9th comment:
Votes: 0
staryavsky said:
My main 'beef' with the data structure is that adding anything seems like a huge burden.
Say I wanted to add a flag called "rarity" to objects, e.g. Rare, Common, Uncommon, Epic, etc.
I'm stuck having to add it to the code in a million places, then I would have to edit every .are file,
and I would have to edit the load/save methods as well.


This isn't entirely true. You definitely wouldn't have to edit every .are file. First, you'd modify the save routine, copyover, then asave world. Then you'd modify the load routine to match the new save format. Copyover again, and bam: new format in. You could also do it by versioning your area files, much the same way pfiles are versioned. The only really good reason to switch to SQL is easy (extensive) web-based integration. Other than that, you can pretty much accomplish the same effects as a SQL DB with much less code than it would take to convert your entire codebase (assuming little to no website integration).
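
As a rough sketch of the versioning idea (the "rarity" field and RARITY_COMMON default are hypothetical, borrowed from the example above; fread_number() is the stock ROM reader):

#include "merc.h"

/* Bump this when the object format changes; write it into the area header. */
#define OBJ_FILE_VERSION 2

void fread_obj_rarity(FILE *fp, OBJ_INDEX_DATA *pObj, int file_version)
{
    if (file_version >= 2)
        pObj->rarity = fread_number(fp);   /* field exists from version 2 on */
    else
        pObj->rarity = RARITY_COMMON;      /* sensible default for old files */
}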

You really should look at this project and think "How much time will I save once everything's converted?" and also "How much time will it take to convert?" That will give you a better idea of whether it's actually worth doing. Now, I'd love to help you with these questions, but honestly, I have no idea how fast you could write this stuff… only you can answer that. If you can't, you may not have the experience necessary to do it. If that's the case, I'd say go for it anyway :P, since it shows you need the experience, and gaining that experience alone is reason enough to take on such a project. Gaining experience is everything in that situation.
16 Feb, 2009, David Haley wrote in the 10th comment:
Votes: 0
Personally I think that a conversion to SQL is not really productive unless you're planning on doing something else with the SQL data. The main advantage it has in this regard is that lots of languages can "talk" SQL, so you don't have to keep writing new save/load routines.

Using SQL isn't some kind of magic solution that will solve all of your problems. You will still need to either load the data from SQL into some kind of in-game data structures, or write some kind of wrapper that makes a SQL query every time some value is needed.

Davion is right, and I would add that a new field wouldn't mean changing that much code either. Obviously you'd have to add code wherever you wanted to use the field, but that is true regardless of the data storage method.

A lot of people think "SQL is modern, therefore I should use it instead of flat files." I would argue that you waste time doing that. Flat files in general are perfectly fine for the purposes of saving and loading data (although Dikurivatives have a fairly, err, shall we say suboptimal library for dealing with files). Before converting to SQL, make sure you have use cases beyond just sticking everything into a database.
17 Feb, 2009, Rojan QDel wrote in the 11th comment:
Votes: 0
My goal has always been to use some SQL backend (MySQL or SQLite) in order to make a web interface (in PHP) a more realistic endeavor; not to play the game, but to provide information, etc.

Using SWR, my biggest issue has been converting areas to MySQL, since area filenames are used so frequently in our code.
17 Feb, 2009, elanthis wrote in the 12th comment:
Votes: 0
Quote
I'm stuck having to add it to the code in a million places, then I would have to edit every .are file,
and I would have to edit the load/save methods as well.


You could probably solve this just by updating your flat file format to something saner. Nothing stops you from using a more structured, extensible file format.

Most MUD loading/saving code looks something like this:

/* load: fields are read back purely by position in the file */
int foo = read_int();
char* bar = read_string();
float baz = read_float();

/* save: fields are written in the same fixed order, with no labels */
write_int(foo);
write_string(bar);
write_float(baz);


The output ends up looking something silly, like this:

123
This is a string~
12.34


Obviously, adding a new field would confuse such code. With a very simple change, you can make the code much more extensible, capable of handling both the addition and removal of fields without error (so long as you don't reorder the fields in the file):

/* load: each field is looked up by key, with a default if it's missing */
int foo = read_int("foo", 0);
char* bar = read_string("bar", "default");
float baz = read_float("baz", 1.0);

/* save: each field is written out together with its key */
write_int("foo", foo);
write_string("bar", bar);
write_float("baz", baz);


The output could look something like:

foo: 123
bar: This is a string
baz: 12.34


You could also use some standard structured format (e.g. XML), but I don't personally feel that's necessary. XML is harder to edit by hand and XML on its own offers no real benefits unless you plan on using some kind of standard schema for interoperability with other software. As a data format for a single application, XML offers few advantages over a format like the above.

With the new object I/O code, you can then use one of two loading algorithms. The simpler of the two is not as resilient to removing fields, but deals with added fields perfectly: when reading a field, look at the key name; if it matches, return the parsed value; if it doesn't, return the default (the second parameter). To deal with a removed field, you'd need to keep the read_*() call in but just ignore the result (you may want a read_string_ignore() variant to avoid memory leaks). It can leave your load code a bit cluttered if you're prone to adding/removing fields often, but it will keep working.
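
For example, a minimal sketch of that match-the-key-or-fall-back reader, assuming a "key: value" line format (the explicit FILE * parameter is just for the sketch):

#include <stdio.h>
#include <string.h>

/* Read the next "key: value" line; if the key matches, return its value,
 * otherwise rewind so later reads still work and return the default. */
int read_int(FILE *fp, const char *key, int dflt)
{
    char name[64];
    int value;
    long pos = ftell(fp);              /* remember where we started */

    if (fscanf(fp, " %63[^:]: %d", name, &value) == 2
        && strcmp(name, key) == 0)
        return value;                  /* key present: consume it */

    fseek(fp, pos, SEEK_SET);          /* key absent: rewind, use default */
    return dflt;
}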

The better way, which would probably be equivalent in complexity to using SQL (minus the part where you'd have to manually maintain a SQL schema), is to read each file in full into a hash/tree structure mapping key names to values. The individual load routines would keep the same interface; only their internals would change, to look up the requested key instead of reading directly from the file. Saving can remain unchanged.
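
A rough sketch of that idea, with a deliberately simple fixed-size table standing in for the hash/tree (the struct and function names are made up for the example):

#include <stdlib.h>
#include <string.h>

#define MAX_FIELDS 256

struct field  { char key[64]; char value[256]; };
struct record { struct field fields[MAX_FIELDS]; int count; };

/* Find a key in the already-loaded record, or fall back to a default. */
static const char *record_get(const struct record *r, const char *key,
                              const char *dflt)
{
    int i;
    for (i = 0; i < r->count; i++)
        if (strcmp(r->fields[i].key, key) == 0)
            return r->fields[i].value;
    return dflt;
}

/* Same interface as before, but now it just looks the key up in memory. */
int read_int(const struct record *r, const char *key, int dflt)
{
    const char *v = record_get(r, key, NULL);
    return v ? atoi(v) : dflt;
}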

You could eliminate the step of loading the whole file/chunk into memory by using a pull-algorithm for loading, but that would require extensive rewrites of your loading code, and it's pretty painful to deal with without using a lot of macros to glue things together. I wouldn't bother.

Overall, the changes would be much less work than converting to SQL, and you'd get the biggest advantage of converting to SQL as a storage mechanism, too. Using SQL for complex queries on your data (as opposed to just using it as a storage mechanism) requires keeping the SQL data constantly synchronized with the MUD, which really requires a massive rewrite of the codebase (you would essentially be rewriting it from scratch). At that point you might as well just write the MUD to use SQL directly for most operations instead of keeping an in-memory tree of data. Plus, if the DB is just a storage mechanism, then you can't really expect to let your website talk to the SQL DB, because the website would be querying stale data. So really, unless you are planning on writing a true SQL-based MUD engine from scratch, doing all the work to use SQL as a storage mechanism just isn't worth the effort, not even remotely.
17 Feb, 2009, JohnnyStarr wrote in the 13th comment:
Votes: 0
Wow, once again, thanks for the help.

I didn't think it would be that hard, but I will definitely find an alternate way of doing it.

I would have to agree that, because the MUD is already in sync with the flat files, it would be easier to create something that edits those files than to try to move everything into SQL and wrap the MUD code around that new data structure.
18 Feb, 2009, elanthis wrote in the 14th comment:
Votes: 0
I didn't mean that you should have external software use the flat files. That would be pretty bad.

I meant that the MUD should have an interface that external software communicates with, ideally an HTTP port with a simple JSON/XML RPC mechanism that can easily be accessed through PHP/Perl/Python/ASP/whatever. The same is true even if you do use SQL.

Most people who implement SQL as their storage system end up wanting to use an architecture like:
Website <—> Storage <—> MUD

What I'm saying is that you should instead use whatever storage is easiest, and then use an architecture like:

Website <—> MUD <—> Storage
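
One way that could look, as a bare-bones, blocking sketch (the function name and the JSON payload are made up, and a real MUD would fold this into its existing select() loop rather than blocking):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Answer a single HTTP request with a small JSON status blob, so the
 * website asks the MUD for live data instead of reading the storage layer. */
void serve_one_status_request(int port)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    bind(listener, (struct sockaddr *) &addr, sizeof(addr));
    listen(listener, 4);

    int client = accept(listener, NULL, NULL);
    char request[1024];
    read(client, request, sizeof(request) - 1);     /* ignore the request details */

    const char *body = "{ \"players_online\": 3 }"; /* real code would build this */
    char response[512];
    snprintf(response, sizeof(response),
             "HTTP/1.0 200 OK\r\n"
             "Content-Type: application/json\r\n"
             "Content-Length: %d\r\n"
             "\r\n"
             "%s",
             (int) strlen(body), body);
    write(client, response, strlen(response));

    close(client);
    close(listener);
}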