31 Dec, 2006, Justice wrote in the 1st comment:
Votes: 0
There seems to be a lot of discussion about SQL and MUDs lately, so I figured I'd start a topic on it.

Anyway, I was just wondering what uses people have for putting their data into a SQL database. I've got years of experience with several databases (no embedded databases, though). My opinion is that databases are too slow for certain types of files, but have acceptable performance for others.

After the builder discussion on this, I've started writing (in my spare time) a SQL export for the mud. When area files are saved, they generate the normal flat text file along with a SQL text file that can be loaded into the database. I've been using the database for a grub-like utility website. The eventual goal is for the mud to generate statistics which will be used to aid decision making on features, tweaking, etc. Things like help files will also be available to players on the web page (probably boards too).
31 Dec, 2006, Rojan QDel wrote in the 2nd comment:
Votes: 0
I'm curious how you decided to export the area to SQL. This is one of my main dilemmas in SWRSQL, since in the past most area commands and functions have been based on the area filename. It seems pointless to carry that into the DB, but I'm having a hard time deciding whether I should redo the whole method of saving, accessing, and modifying areas (using an id or some such instead of the filename) or keep it similar and try to adapt.
31 Dec, 2006, Justice wrote in the 3rd comment:
Votes: 0
I'm using the area filename as a string in the primary records (area_room, area_mobile, area_object, etc). However, this is because the mud still uses the flat files; the database is used for analysis and needs to represent what the mud is using.

If I were replacing the flat file system, I would enumerate the existing areas into vnums and use those. That would improve overall db performance and reduce the data size. I contemplated doing that with my system, but since the mud doesn't use them, maintaining the mapping would be a pain.
31 Dec, 2006, Rojan QDel wrote in the 4th comment:
Votes: 0
Well, my thought was to have a table for rooms, a table for mobs, a table for objects, and a table for area data. The problem is, when do you save each? You could save each piece of an object's data as it's modified, or you could save it all at once…
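For concreteness, a minimal sketch of that four-table layout, keyed by integer vnums along the lines Justice suggested above (every table and column name here is illustrative, not from SWRSQL):

CREATE TABLE area (
    area_vnum INT PRIMARY KEY,
    name      VARCHAR(80),
    filename  VARCHAR(40)    -- kept only as a reference back to the flat file
);

CREATE TABLE room (
    room_vnum   INT PRIMARY KEY,
    area_vnum   INT NOT NULL REFERENCES area(area_vnum),
    name        VARCHAR(80),
    description TEXT
);

CREATE TABLE mobile (
    mob_vnum    INT PRIMARY KEY,
    area_vnum   INT NOT NULL REFERENCES area(area_vnum),
    short_descr VARCHAR(80),
    level       INT
);

CREATE TABLE object (
    obj_vnum    INT PRIMARY KEY,
    area_vnum   INT NOT NULL REFERENCES area(area_vnum),
    short_descr VARCHAR(80),
    item_type   INT
);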
01 Jan, 2007, Justice wrote in the 5th comment:
Votes: 0
I have a separate table for each, and a few extra tables to handle things with an n-to-1 relationship. In my situation, the entire area exports its queries into a single text file that I can execute. It deletes old rows and inserts new ones. I do it this way to ensure that new records get added without duplicating existing records. It's not difficult to write the query to verify and then insert or update as necessary.
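As an illustration, the generated SQL text file for a single room might look something like this (area_room comes from the earlier post; area_room_exit, the filename, and the vnums are made up for the example):

-- generated by the area exporter; safe to re-run without duplicating rows
DELETE FROM area_room WHERE area_file = 'midgaard.are' AND vnum = 3001;
INSERT INTO area_room (area_file, vnum, name, description)
  VALUES ('midgaard.are', 3001, 'The Temple Square', '…');

DELETE FROM area_room_exit WHERE area_file = 'midgaard.are' AND room_vnum = 3001;
INSERT INTO area_room_exit (area_file, room_vnum, direction, to_vnum)
  VALUES ('midgaard.are', 3001, 0, 3054);

Because every DELETE runs before its matching INSERT, executing the file repeatedly never duplicates records.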

In your situation, with the mud directly accessing the database, I'd update each record when the value it represents is modified. That keeps the database in sync with the mud and reduces the number of queries you have to run. Of course, if you have acceptable performance, there's no reason you can't just modify savearea.
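As a hedged sketch of that update-on-modify idea, an OLC setter could write just the one changed column; the MySQL connection, function name, and schema here are all assumptions, not SWRSQL's actual code:

#include <stdio.h>
#include <string.h>
#include <mysql/mysql.h>

extern MYSQL *db;   /* assumed: global connection opened at boot */

/* Hypothetical OLC hook: when a builder renames a room, update that
 * single column instead of re-saving the whole area. */
void save_room_name(int vnum, const char *name)
{
    char escaped[512], query[640];

    mysql_real_escape_string(db, escaped, name, strlen(name));
    snprintf(query, sizeof(query),
             "UPDATE room SET name = '%s' WHERE room_vnum = %d",
             escaped, vnum);
    if (mysql_query(db, query) != 0)
        fprintf(stderr, "save_room_name: %s\n", mysql_error(db));
}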

For reference… my export code is still incomplete and already generates 1041 queries for my hometown (227 rooms with exits, extra descriptions, and progs; 96 objects with affects, extra descriptions, and progs; 47 mobiles with progs). It currently executes in about 10ms on MS SQL Server 2000. Granted… I don't have much data, there are no indexes, and delete/insert is pretty fast.
03 Jan, 2007, Rojan QDel wrote in the 6th comment:
Votes: 0
Well, the way I've been running SWRSQL so far, when the mud boots up it loads all the db content into memory just as it normally would with flat files. For saving, for ANYTHING, it saves only what was modified. So I suppose I could do the same for areas.
03 Jan, 2007, Metsuro wrote in the 7th comment:
Votes: 0
So if I might ask, why load everything into memory rather than just fetching the information when it's needed? Is there any benefit to either of these?
03 Jan, 2007, kiasyn wrote in the 8th comment:
Votes: 0
speed
03 Jan, 2007, Metsuro wrote in the 9th comment:
Votes: 0
So loading into memory is better, speed-wise, than querying on demand?
03 Jan, 2007, kiasyn wrote in the 10th comment:
Votes: 0
yes
03 Jan, 2007, Metsuro wrote in the 11th comment:
Votes: 0
Is it possible to set it up to load things into memory, but when the information in the db changes, to drop what's in memory and reload the new information? Or is there some better way to do that?
04 Jan, 2007, Guest wrote in the 12th comment:
Votes: 0
Yes. Loading the contents of the database into memory at bootup is probably the better approach if you're going to store your area files in the db. Trying to "query on demand" for all of the information you'd need would be insane. A moderate-sized pbase would be enough to start hurting you on resources.

When data in those areas is altered via OLC or some other expected method, you gain a huge advantage: you can issue an update for just the specific bit of information that changed. Need to change your uber-sword to +200 instead of +300? No problem. Only the sword's data needs to be touched, and in a case like this, only the sword's plus factor needs to be updated. The same operation with a flatfile area would require saving the entire area file to disk, along with deleting the previous copy.
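In SQL terms, that whole save boils down to a single statement (table and column names invented for the example):

UPDATE object SET plus_factor = 200 WHERE obj_vnum = 12345;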

As for sensing when information in the DB changes and updating live game objects with that information: yes, it's possible. Is it easy? Probably not, and it would also likely negate any advantage of loading everything into memory. What you could do instead is have each new copy of an object that depends on the index data query the DB for the current stats when it's created. Basically like how most muds handle index data now.

And if you set things up right, you could have several running copies of the mud sharing the same data, provided you didn't get carried away and let all of them have a free-for-all with modifications to the data.
04 Jan, 2007, Metsuro wrote in the 13th comment:
Votes: 0
Interesting. I'm looking for a way to bring web-based building into a MUD so building can be done from slightly more places, as well as a way to access player information better than what smaug currently provides with "finger".
04 Jan, 2007, Metsuro wrote in the 14th comment:
Votes: 0
Or would it be better to have the website set a flag like UpdateArea = true, store what was changed somewhere, and then have some kind of timer on the mud check for updates and use the stored information as a pointer to reload just that portion?
05 Jan, 2007, Justice wrote in the 15th comment:
Votes: 0
Metsuro said:
So loading into memory is better, speed-wise, than querying on demand?


Querying data on demand is slow with a database because each query has a certain amount of associated overhead. Also, in order to retrieve a value, the database must look up the row using either an index or a table scan. Compare that to loading into memory, where (using C/C++) the code simply offsets a memory location and reads a certain number of bytes.
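To make the comparison concrete, here's a rough C sketch of the two access paths; the MySQL connection, table, and column names are assumptions for illustration:

#include <stdio.h>
#include <stdlib.h>
#include <mysql/mysql.h>

struct char_data {
    int hit;
    int max_hit;
};

/* In memory: this compiles down to "base pointer + offset", a couple of
 * instructions with no parsing, I/O, or network round trip. */
int get_hit_in_memory(struct char_data *ch)
{
    return ch->hit;
}

/* On demand: every read pays for the round trip to the server, the query
 * parse, and an index lookup or table scan before any data comes back. */
int get_hit_from_db(MYSQL *db, int char_id)
{
    char query[128];
    MYSQL_RES *res;
    MYSQL_ROW row;
    int hit = 0;

    snprintf(query, sizeof(query),
             "SELECT hit FROM pc_data WHERE char_id = %d", char_id);
    if (mysql_query(db, query) == 0 && (res = mysql_store_result(db)) != NULL) {
        if ((row = mysql_fetch_row(res)) != NULL && row[0] != NULL)
            hit = atoi(row[0]);
        mysql_free_result(res);
    }
    return hit;
}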
05 Jan, 2007, Justice wrote in the 16th comment:
Votes: 0
Metsuro said:
Interesting. I'm looking for a way to bring web-based building into a MUD so building can be done from slightly more places, as well as a way to access player information better than what smaug currently provides with "finger".


There are a variety of ways to handle this. With a little work, it's not hard to make the mud able to reload an area, or even a single object/room/mobile. The hard part is synchronizing the data. There are a few ways it could be done automatically, depending on your setup. Some databases support triggers, which can kick off a process that alerts the mud to data changes. Alternatively, you can use a control field on records, or a global control table that the game queries periodically. Finally, you can easily add a manual update command so builders can make changes in the database and then request that the game reload them.

Each of these methods has its own advantages and disadvantages.
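As a sketch of the "global control table" option: the web side inserts a row whenever a builder changes something, and the game polls the table from its update loop. Every name below is invented for the example:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mysql/mysql.h>

extern MYSQL *db;                  /* assumed: global connection */
extern void reload_room(int vnum); /* assumed: re-reads one room from the db */

/* Call from the game loop every minute or so.  The web front end would
 * feed the table with, e.g.:
 *   INSERT INTO pending_update (kind, vnum) VALUES ('room', 3001);  */
void poll_pending_updates(void)
{
    MYSQL_RES *res;
    MYSQL_ROW row;

    if (mysql_query(db, "SELECT id, kind, vnum FROM pending_update") != 0)
        return;
    if ((res = mysql_store_result(db)) == NULL)
        return;

    while ((row = mysql_fetch_row(res)) != NULL) {
        char query[128];

        if (strcmp(row[1], "room") == 0)
            reload_room(atoi(row[2]));
        /* …same idea for mobiles, objects, help files… */

        snprintf(query, sizeof(query),
                 "DELETE FROM pending_update WHERE id = %s", row[0]);
        mysql_query(db, query);
    }
    mysql_free_result(res);
}

A trigger-based version would push instead of poll, and the manual version is just a builder command that calls reload_room directly.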
05 Jan, 2007, Tyche wrote in the 17th comment:
Votes: 0
Justice said:
Metsuro said:
So loading into memory is better, speed-wise, than querying on demand?


Querying data on demand is slow with a database because each query has a certain amount of associated overhead. Also, in order to retrieve a value, the database must look up the row using either an index or a table scan. Compare that to loading into memory, where (using C/C++) the code simply offsets a memory location and reads a certain number of bytes.


Well, the generalization above ignores that the RDBMSs under discussion already cache. The primary overhead on a table that exists in memory is the parsing of the query, if dynamic/unbound SQL is used, and the execution of the query algorithm itself (which in most cases is as efficient as the algorithms one would implement for one's own in-memory store).
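For illustration, here is what bound SQL looks like with SQLite's C API: the statement is parsed and planned once, and each later lookup is only a bind and a step, so the parse overhead mentioned above drops out of the hot path (a sketch, not code from any of the muds discussed):

#include <sqlite3.h>

static sqlite3_stmt *room_stmt;   /* prepared once, reused for every lookup */

/* Parse and plan the query a single time, e.g. at boot. */
int prepare_room_lookup(sqlite3 *db)
{
    return sqlite3_prepare_v2(db,
        "SELECT name FROM room WHERE room_vnum = ?", -1, &room_stmt, NULL);
}

/* Each call afterwards re-executes the already-parsed statement.
 * (The returned pointer is only valid until the next reset/step.) */
const char *room_name(int vnum)
{
    const char *name = NULL;

    sqlite3_reset(room_stmt);
    sqlite3_bind_int(room_stmt, 1, vnum);
    if (sqlite3_step(room_stmt) == SQLITE_ROW)
        name = (const char *) sqlite3_column_text(room_stmt, 0);
    return name;
}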
05 Jan, 2007, Justice wrote in the 18th comment:
Votes: 0
Tyche said:
Well, the generalization above ignores that the RDBMSs under discussion already cache. The primary overhead on a table that exists in memory is the parsing of the query, if dynamic/unbound SQL is used, and the execution of the query algorithm itself (which in most cases is as efficient as the algorithms one would implement for one's own in-memory store).


That's interesting. I'd like to know where you got this information. In 7 years of working with corporate databases, I've found that over 90% of query execution time (for selects) goes to index lookups and table scans. This has held true for tables holding as few as 30 rows and as many as 100 million rows, at least in my experience, and it's backed up by the profiling tools I've been using.

There are many different ways to optimize database access depending on what you're doing. Given the circumstances, I'd rather have my data in memory than have to request it from the database whenever I need to check or update something. With "query on demand", the cache would actually be of limited usefulness, because during a single execution pulse the mud accesses the majority of its data. Assuming the cache was large enough, it would at best remove disk access overhead from the database's reads.

Additionally, this does not take updates into account. Databases store their information on disk, not in memory, so when you update a value, it requires a disk access. You can't honestly expect me to believe that network communication, query parsing, index lookups/table scans, and disk write overhead are faster than changing a value in a struct?

On a side note, this also doesn't account for whatever client-library overhead may exist. Now, accessing and searching a large amount of data… that's a very, very different circumstance. In that case I would agree with you that the database has better performance than the algorithms the average developer would write.
07 Jan, 2007, Kelvin wrote in the 19th comment:
Votes: 0
It may be a bit premature to announce this, as it's not really where I want it to be right now, but I've been hacking on a Python+SQL based MU* server for the last few months. I wanted something that was really easy to develop and work with, yet completely accessible to the web and outside integration. I settled on a combination of Python and Django. Django is a web framework that also includes things like database abstraction (the codebase runs on SQLite, MySQL, and Postgres, and soon Oracle once the Django driver is released). It has an automatically generated admin interface that would theoretically already let you do a lot of building from a browser, and there is no game-handled caching of objects, so everything is updated and accessed in real time.

Granted, I haven't had a chance to test the codebase under load; this is a grand experiment to see just how much I can throw at a few different RDBMSs without the typical memory-based approach. As mentioned in previous posts, many RDBMSs cache, and Django does some things to help with that, so some of the common queries are very fast. With a lightweight RDBMS like SQLite, I've found no noticeable overhead when I configure it to load the database into memory from SQLite's end (no additional work on my part aside from adjusting some configuration). The Google Group for the project is at http://groups-beta.google.com/group/even... and there's a Google Code link there to the repository. It's slightly annoying to install right now, and I haven't written good instructions for it since it's not really ready for public use, but if you want a demo, PM/email me and I'll be glad to show you.

Now that I'm done with my shameless plug: yes, from what I've done so far, a completely SQL-based setup with minimal caching looks to be great for the vast majority of games out there. I won't know how well this thing scales until later, but I haven't seen much reason to worry just yet.