23 May, 2014, syn wrote in the 1st comment:
Votes: 0
Hello Folks..

Wasn't exactly sure where to post this, but as the base I am working on is in C, I figured this was as good a place as any.

So, being 1) a hobbyist and 2) newer to SQL in general, I've been adding SQL support into my codebase.

Currently I have some channels, logs, and the help system set up in a dedicated MySQL server in my home. I have been able to figure out the basic connections to MySQL, inserting data, selecting data, and all is well, except I feel like I did it very inefficiently. They currently work without any issue, no memory leaks or problems according to extended Valgrind runs (just to be sure), and generally I haven't been able to crash them (after adding in a fair bit of word/character scanning, which I also feel is probably a waste). The character scanning is mostly for channel logs, to keep a stray ' from breaking the INSERT statement, so it will actually insert and be recallable for the history command.
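For what it's worth, the kind of character scanning described above mostly boils down to doubling any ' before it reaches the statement text. Here is a minimal sketch of that idea (the function name is invented); in practice the C API's mysql_real_escape_string(), or better yet prepared statements, do this job properly:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Double every single quote so the text is safe inside a quoted SQL
 * literal.  Caller frees the result.  A real codebase should prefer
 * mysql_real_escape_string() or prepared statements over this. */
static char *sql_quote_dup(const char *in)
{
    size_t len = strlen(in), quotes = 0, i, j = 0;
    for (i = 0; i < len; i++)
        if (in[i] == '\'')
            quotes++;
    char *out = malloc(len + quotes + 1);
    if (!out)
        return NULL;
    for (i = 0; i < len; i++) {
        out[j++] = in[i];
        if (in[i] == '\'')      /* '' is SQL for a literal quote */
            out[j++] = '\'';
    }
    out[j] = '\0';
    return out;
}
```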

For one, I did not know about stored procedures, or how to use them. For that matter, I am not sure whether it would be worth going down that road to begin with, but I rather dislike having a bunch of hard-set SQL statements floating around the code, in case I need or want to change any of them, and whatever cascade of edits I may end up with if I do.

Anyway, any thoughts, ideas, or dissuasion with good reason are appreciated.
23 May, 2014, quixadhal wrote in the 2nd comment:
Votes: 0
I have no idea what kind of stored procedures (if any) MySQL supports. PostgreSQL supports multiple languages, including its own SQL-based language PL/pgSQL (like T-SQL from Microsoft, or whatever Oracle calls theirs these days), and you can also add Perl, Python, Ruby, and probably a few other languages as "untrusted" languages (meaning you need DBA privileges to use them).

As a couple of examples I've used:

CREATE FUNCTION fn_b64decode(text) RETURNS text
AS '
use MIME::Base64;

my $b64 = $_[0];
my $thing = decode_base64($b64);

return $thing;
'
LANGUAGE plperlu;


CREATE FUNCTION fn_ansi2png(text) RETURNS text
AS '
use Image::ANSI;
use MIME::Base64;

my $login = decode_base64($_[0]);
return undef if ! $login;
my $img = Image::ANSI->new( string => $login );
return undef if ! $img;
my %opts = ( mode => "full", font => undef, palette => undef );
my $png = $img->as_png( %opts );
return undef if ! $png;
my $b64 = encode_base64($png);

return $b64;
'
LANGUAGE plperlu;


CREATE FUNCTION fn_update_login() RETURNS "trigger"
AS '
BEGIN
new.html_login = fn_ansi2html(fn_b64decode(new.ansi_login));
new.png_login = fn_ansi2png(fn_b64decode(new.ansi_login));
RETURN new;
END;'
LANGUAGE plpgsql;


CREATE TRIGGER trg_update_login
BEFORE INSERT OR UPDATE ON mudlist
FOR EACH ROW
EXECUTE PROCEDURE fn_update_login();


What that does: when I insert a row, I pass in a base64-encoded copy of the ANSI login screen (this is from my mudlist code). I encode it to avoid issues with stupid Unicode sequences, as some ANSI escapes look like malformed Unicode. As the row is being inserted or updated, the trigger I defined at the bottom runs the update_login function, which intercepts the data being updated and modifies it right before it gets stored. That stored procedure calls a couple of other stored procedures in Perl to convert the ANSI text into HTML and a PNG graphic.

While I certainly could have done that conversion in the actual web page code, I felt it was better to move that processing to the database server, where it won't directly affect the user. It also has the added benefit that if I later write code to update the raw ANSI data, it would also rerender the HTML and PNG versions without me having to remember to do it.

That's usually what stored procedures are used for…

If you just want to make shorthand queries that you can change in the database, instead of in your code… you might want a view instead of a stored procedure.
23 May, 2014, plamzi wrote in the 3rd comment:
Votes: 0
syn said:
For one, I did not know about stored procedures, and how to use or enact them. For that matter I am not sure whether it would be worth it for me to go down that road to begin with, but I rather dont like having a bunch of hard set SQL statements floating around the code, should I need or want to change any, and what kind of cascade I may end up with if so..


There's no easy answer. You may find some things easier to do using stored procedures. You may find that, depending on how your code handles the output of the stored procedure, it would be unsafe to make certain changes to it without recompiling the code.

My experience with MySQL stored procs is that they only make certain things easier. For example, they're not very easy on the eye when they have too many variables, or have to generate SQL dynamically.

Avoiding hardcoded SQL in your code seems to be a separate issue. One thing you can do is store your statements in a SQL database table, where they'd be more organized and where you could potentially modify some of them without needing to recompile. However, this introduces a different kind of hassle (you'd have to have a window open to your database every time you touch code related to it). You can also organize your statements in a lookup C struct in a separate file, etc. That's good practice.
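As a rough sketch of that lookup-struct idea (all names and queries here are invented for illustration), something like this keeps every statement template in one file, referenced by name from the rest of the code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* One place to keep every SQL template the game uses; the code refers
 * to statements by name, so editing a query means touching one file. */
struct sql_stmt {
    const char *name;
    const char *sql;
};

static const struct sql_stmt sql_table[] = {
    { "find_help",   "SELECT entry_num, level, keywords, text "
                     "FROM bd_help WHERE keywords = ?" },
    { "log_channel", "INSERT INTO channel_log (channel, speaker, text) "
                     "VALUES (?, ?, ?)" },
    { NULL, NULL }              /* sentinel ends the table */
};

static const char *sql_lookup(const char *name)
{
    for (const struct sql_stmt *s = sql_table; s->name; s++)
        if (strcmp(s->name, name) == 0)
            return s->sql;
    return NULL;
}
```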
23 May, 2014, syn wrote in the 4th comment:
Votes: 0
That's a very awesome example, but well beyond the scope of anything I've seen in relation to MySQL.

For instance, here is a comparatively, ridiculously simple stored procedure based off some examples:

DELIMITER $$

CREATE DEFINER=`root`@`localhost` PROCEDURE `find_help`(IN skeyword VARCHAR(255))
BEGIN
SELECT entry_num, level, keywords, text FROM bd_help WHERE keywords=skeyword;
END $$

DELIMITER ;

Most examples I see are basically variations of that type of thing, some much more in-depth with a lot of joins and whatnot, but yeah… anyway.

So that's interesting to see as a difference, or possibly I'm not looking in the right places. I picked MySQL originally because I had sort of used it in the past; as I am so early in retooling, would it make sense to go for something like PostgreSQL? I don't know any inherent plusses of one over the other, if any, so at this point I'm pretty wide open to any system.
23 May, 2014, syn wrote in the 5th comment:
Votes: 0
plamzi said:
syn said:
For one, I did not know about stored procedures, or how to use them. For that matter, I am not sure whether it would be worth going down that road to begin with, but I rather dislike having a bunch of hard-set SQL statements floating around the code, in case I need or want to change any of them, and whatever cascade of edits I may end up with if I do.


There's no easy answer. You may find some things easier to do using stored procedures. You may find that, depending on how your code handles the output of the stored procedure, it would be unsafe to make certain changes to it without recompiling the code.

My experience with MySQL stored procs is that they only make certain things easier. For example, they're not very easy on the eye when they have too many variables, or have to generate SQL dynamically.

Avoiding hardcoded SQL in your code seems to be a separate issue. One thing you can do is store your statements in a SQL database table, where they'd be more organized and where you could potentially modify some of them without needing to recompile. However, this introduces a different kind of hassle (you'd have to have a window open to your database every time you touch code related to it). You can also organize your statements in a lookup C struct in a separate file, etc. That's good practice.


Oh, I am struggling to even accurately describe what I am looking for, and being on the support end of various technologies, I realize this is akin to asking what's wrong with my computer, or which one is best. That's a loaded and nebulous question.

Be that as it may, mainly I am seeking input on how people do it now, and/or examples of what might be possible between MySQL/C or SomeDBMS/C.

For instance, your suggestion of storing the statements in a struct is great; I have no idea why I didn't think of it, but I hadn't. Even that would be a lot nicer to deal with, as they would effectively be in one spot and require editing just the 'template' statement once, versus however many times I've needed it in the code (changing what's read/output is clearly another matter).

The documentation as far as I have read regarding MySQL, or my understanding of it (which could be completely wrong), is that the procedure is stored on the server; the initial message 'triggers' the proc, which runs within the server, does whatever it is doing there, and sends back the data. Then, with MySQL, you need to step through the results somewhat differently than normal (I just had the example/doc up but don't at the moment, so forgive how vague this all is).

I would much rather have any massive search/pull be done at the SQL server than in the MUD, which is what I understand one of the benefits to be, as people have mentioned in some MySQL forums. As for best practices, or real examples of it in use next to C, there's not a lot of that; I found a couple, but they differ in approach without really explaining why.

So, to sum up, thanks for the input thus far, and I would love any more, or if anyone has any direct experience, an example if you would be so kind. Otherwise have a great day :)
23 May, 2014, plamzi wrote in the 6th comment:
Votes: 0
syn said:
For instance, here is a comparatively, ridiculously simple stored procedure based off some examples:

DELIMITER $$

CREATE DEFINER=`root`@`localhost` PROCEDURE `find_help`(IN skeyword VARCHAR(255))
BEGIN
SELECT entry_num, level, keywords, text FROM bd_help WHERE keywords=skeyword;
END $$

DELIMITER ;

Most examples I see are basically variations of that type of thing, some much more in depth with a lot of joins and whatnot but yeah.. anyway..


Something as simple as a SELECT statement, even with lots of joins, should probably just be a view. And if it's not used very frequently, it may not even deserve to be a view. Use procedures for stuff that needs several queries, dynamically generated queries, or temporary tables to produce a result, or more than one result set. If you can't think of a use case like that for now, just don't use procedures :)
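For instance, a sketch of syn's find_help lookup as a view (assuming the same bd_help table from the example above):

```sql
CREATE VIEW v_help AS
    SELECT entry_num, level, keywords, text
    FROM bd_help;

-- the game then just selects from it like a table:
-- SELECT * FROM v_help WHERE keywords = 'combat';
```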

syn said:
I would much rather have any massive search/pull be done at the SQL server than in the MUD, which is what I understand one of the benefits to be, as people have mentioned in some MySQL forums.


One thing you have to understand is that the official C API for MySQL is blocking. It doesn't matter whether your massive queries are run inside a stored procedure, or the exact same queries are run by your code. In either case, your code will sit there waiting for the result.

There's an unofficial non-blocking API, which I haven't used so don't know if it's any good:
http://forums.mysql.com/read.php?168,255...

If you want a non-blocking database connection, your choice of C, and probably a codebase with a retro blocking design, may make that difficult. If you are stuck with your particular codebase, though, you can do a whole lot by simply not doing massive search/pull queries from your game code. For example, we have a scheduled daily backup, which is kicked off by a MySQL event process and doesn't affect the server in any way. Likewise, you can prep any data periodically from another process for your game server to just select from.
24 May, 2014, quixadhal wrote in the 7th comment:
Votes: 0
Another option, which is used by the Discworld mudlib for FluffOS (LPMUD) is to write a small database script in python, ruby, perl, whatever… which talks directly to the database and listens on a socket. Then your MUD talks to this script via a socket, just like any other network connection. As a result, you basically send your query to the script and pick up the results the next time through the main loop (or however often you feel like polling it).

If you do it that way, you can isolate the actual SQL into the easy-to-modify script, and just send shorthand commands to it, almost like a mini language.

C codebase sends "load player Foobar" to the script over a socket. Script does the SQL needed to fetch the player data for Foobar and sends it to the MUD over the socket. C codebase sees activity on the socket and fetches the data and then does whatever with it.
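A minimal sketch of what that shorthand traffic could look like on the C side (all names here are invented; the real Discworld protocol below differs):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build the shorthand line the MUD would send to the database script,
 * e.g. "load player Foobar\n".  The daemon maps the verb to real SQL,
 * so the C side never embeds a query. */
static int make_db_request(char *out, size_t outsz,
                           const char *verb, const char *arg)
{
    int n = snprintf(out, outsz, "%s %s\n", verb, arg);
    return (n > 0 && (size_t)n < outsz) ? 0 : -1;
}

/* Split one "key=value" reply line coming back on the socket.
 * Modifies line in place and points key/val into it. */
static int split_reply(char *line, char **key, char **val)
{
    char *eq = strchr(line, '=');
    if (!eq)
        return -1;
    *eq = '\0';
    *key = line;
    *val = eq + 1;
    return 0;
}
```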
24 May, 2014, syn wrote in the 8th comment:
Votes: 0
Actually, the codebase is non-blocking; I honestly wouldn't have known unless I had caused a bug by purposely removing the write_to_buffer overflow protections to see what the socket code would do. Turns out it returns EAGAIN (signaling non-blocking sockets), and another portion of the code then hard-drops the socket. So I discovered a couple of handy things there, and as a protection against overloading the socket writer I added a micro-queue, such that it will wait X microseconds, try again, wait, etc., until the socket is no longer data-locked, then push the rest through. Testing in that scenario has been excellent so far, though you would have to do some crazy stuff to trigger it with the buffer protection in place. I felt better knowing that the socket could at least handle a little bottleneck without just dropping altogether. Anyway, back on topic..

Be that as it may, I was unaware that I was effectively at cross purposes between the MySQL API and the codebase as far as that is concerned.

My lack of clarity is surely making me look quite foolish, but alas. My example above was more of a proof of concept; in reality, down the line as it were, I wish to add player data, accounts, and race/class systems into the database. In that scenario I was hoping I could offload the work from the MUD. If that particular point is irrelevant in the grand scheme, that's fine; I just wanted to be as efficient in both arenas as possible. As in, if I can have the SQL server do as much as possible, why wouldn't I?

I looked at what I could find on MudBytes as far as codebases with SQL integration, and didn't find much beyond a lot of hardcoded selects/loads. Perhaps there's not a lot of downloadable stuff with examples, or I didn't find it, but what concerned me was…

Take one of the earlier versions of SocketMUD, for instance.

It had a help integration system, but it did something that seemed ludicrous to me: to find an individual helpfile, it did a SELECT * FROM vanda_help.

With only about 10 helpfiles, sure, no big deal, but with even 100 or 200, and for well-documented MUDs hundreds of helpfiles… that's insane, to me. To load that amount of data into the MUD just to scan through it?

Of course I didn't go with this option, though I did try it to see how badly it would do; mainly it just segfaulted with bad read/write and out-of-bounds errors. So I went with a targeted WHERE/LIKE search instead.

My point is that I had hoped there would be some good examples in the wild making use of more advanced features that I can only imagine exist. My lack of DBA-type knowledge or experience limits what I understand of what could be or is possible with the database, and I want to maximize what I am doing and learning, not just fumble around with a weak attempt, doubling or tripling the work and memory use, and making a system likely as insecure and fragile as the flatfiles I am trying to get away from (in my opinion).

In the past couple of hours I found a very interesting read on creating a MySQL client with the C API (in C code, obviously); it goes over a lot of seemingly useful information, so I have begun reading through it.

MySQL Client Application Program in C/C-...

Apologies for the rambling; hopefully this has made enough sense to illustrate a bit more of my knowledge goals, if not specific 'code' goals.
24 May, 2014, syn wrote in the 9th comment:
Votes: 0
quixadhal said:
Another option, which is used by the Discworld mudlib for FluffOS (LPMUD) is to write a small database script in python, ruby, perl, whatever… which talks directly to the database and listens on a socket. Then your MUD talks to this script via a socket, just like any other network connection. As a result, you basically send your query to the script and pick up the results the next time through the main loop (or however often you feel like polling it).

If you do it that way, you can isolate the actual SQL into the easy-to-modify script, and just send shorthand commands to it, almost like a mini language.

C codebase sends "load player Foobar" to the script over a socket. Script does the SQL needed to fetch the player data for Foobar and sends it to the MUD over the socket. C codebase sees activity on the socket and fetches the data and then does whatever with it.


That is definitely something I would want to investigate. I may be dead wrong, but I just feel silly hardcoding any SELECT/INSERT/DELETE statement, explicit or otherwise, and then, if I add a new field or change some data type or whatever, having to go back and recompile after n changes to the code. Clearly, if I drastically change the data, at some point I need to deal with reading and using that data, but at least for throwing the data around and the initial selection of it… it just feels like using a Sharpie when I could use a perfectly fine erasable pencil instead.
24 May, 2014, plamzi wrote in the 10th comment:
Votes: 0
syn said:
Actually, the codebase is non-blocking; I honestly wouldn't have known unless I had caused a bug by purposely removing the write_to_buffer overflow protections to see what the socket code would do. Turns out it returns EAGAIN (signaling non-blocking sockets), and another portion of the code then hard-drops the socket.


Polling sockets and a few other things can easily be done in C in a non-blocking manner, and have been. That is not going to affect the behavior of your connection to the MySQL db: it will be one connection, blocking. A truly non-blocking codebase is either multi-threaded or event-based (registering callbacks for when a process/worker is done). I can't think of a single C codebase that is built that way.

The external script/daemon design is kind of interesting. It performs the same basic function I discussed earlier: prepping data for you to poll. But unless you get your hands on ready-made code, or are very comfortable in the right language, that's another moving piece for you to code and maintain. So, when all is said and done, decide on what you really need the thing to do, and build no more than that. Don't try to solve problems before they actually materialize. Perfection is the enemy of success.
24 May, 2014, syn wrote in the 11th comment:
Votes: 0
How the rest of the codebase operates is clearly not set up to actually utilize a non-blocking system; however, the actual socket portion isn't blocking, which I found interesting in general. Moot point, clearly.

At any rate, absolutely valid point. Jumping the gun into over design is as foolish as not planning at all. This is a main reason I had wanted to come and feel out all of this amongst those of you more familiar.

If moving the brunt of the SQL calls out of C were possible, and easy, great. If it's not very easy, and as you point out becomes another failure point, then it would be silly.

My main concerns were - given I am using C and cannot just ditch it:

1. Is there a more optimal, or 'best', approach than housing and hardcoding everything in C directly?
2. If so, what kind of time cost would it be to learn/implement?
3. Is the C API somehow flawed, whether due to age or just some quirk I am unaware of?

If the answers are:
1. Possibly, if you know X and can do Y and are very good with Z; then the likely answer for me is probably not.
2. Given 1, it sounds like a significant investment. I have no problem with this, but in the interest of actually making progress, that may be a back-burner issue while I simply get it all working now.
3. Beyond it being blocking, thus far I haven't found or seen any issues, and neither of you (granting Quix's lack of MySQL experience) has mentioned any, so…

Thank you both for all the input it has been quite illuminating, and I would/do love to hear any more you have.
24 May, 2014, syn wrote in the 12th comment:
Votes: 0
It's funny reflecting on all of this; my younger self, out of righteous 'perfection', would likely have ignored all of that and said nay, the best I will learn and make it so!

Now… I would rather make something that works, even if it's somewhat less efficient, as long as it is stable and gets from point A to point B. If I really want to, I can learn it and implement it when I actually do know what the hell it is. Shoehorning in a system I have no experience or knowledge of is stupid, and I am glad I am finally smarter than my pride would have let me be years ago.

-Mini self soapbox over.
24 May, 2014, plamzi wrote in the 13th comment:
Votes: 0
Here's one of several threads that may be good for you to browse:
http://www.mudbytes.net/index.php?a=topi...

Having done the work to store pretty much everything persistent from my C code in standalone MySQL, I am obviously of the persuasion that it's time well spent. But I would not fool anyone that it was easy or quick. Some parts, like storing and reading the world files, were relatively easy and quick to do. Other parts, like storing player equipment, would have been quicker to do using an ORM (I'm not sure what ORM options there are for pure C; probably slim pickings, if any). All of it was pretty tedious (as in, the game was not really gaining any new features; it was just being retooled to make new features easier / more feasible). So you really have to have a clear goal first, and then figure out the best way to get there. Don't do it just for the sake of doing it (unless the goal is just to learn, I guess).
24 May, 2014, syn wrote in the 14th comment:
Votes: 0
Thanks for the link.

My goal is partially to learn, and partially because I finally have the time and ability to implement what I have learned thus far. The game is not open, and may never be; I believe it will be, but time will tell.

Ultimately I would like all data that isn't strictly PC-related in SQL; that much, at least, I know at this point I can do without much trouble. Tedious, yes, but not overly difficult thus far. Actually, I have found things a lot more stable and clear-cut since starting this venture. I have moved several items over to SQL, including a help system I made, and had a few friends bang around on it, Valgrind, all the good stuff. It's been rock solid. From that perspective I am thrilled, and I have learned a lot about SQL, which is fun to me.

Sooo plod on I will :D
24 May, 2014, syn wrote in the 15th comment:
Votes: 0
I forgot, insanely, but the biggest thing I love about moving items into SQL is 'instantaneous' data insertion, retrieval, and modification without having to bugger the MUD, hope some stupid old txt file doesn't break, or the code vomit, or worry about an fpread… you know how it goes.

I LOVE that about the SQL integration so far. Even the seemingly small thing of querying various logs to create a sort of bug or crash 'picture', in any way I want, with a free-form in-game SELECT command and output processor when I happen to break something or we discover some crazy bug somewhere, is awesome.
24 May, 2014, syn wrote in the 16th comment:
Votes: 0
Well, I read through that thread… it's sad what it turned into. I was hoping there would have been an amazing code dump of your project.

Figures, people bicker for no reason.
24 May, 2014, plamzi wrote in the 17th comment:
Votes: 0
Since you're interested in improving productivity, check out this thread as well.

http://www.mudbytes.net/topic-3740

The above thread has the basic code needed, because it's fairly compact and generic. I didn't feel like posting any of the MySQL code would be especially useful, even if it were wanted (which it wasn't :).
24 May, 2014, quixadhal wrote in the 18th comment:
Votes: 0
Just in case anyone is interested… this is the old Discworld MySQL handler in Python. It may not work anymore, and like much in the MUD world, it was used by them for their own purposes, so don't expect fancy robust error checking. But it's one example.

#!/usr/bin/python

import MySQLdb
import string
import types
import fpformat
import SocketServer
import sys
import _mysql

class InputString:
    def __init__(self, str, pos):
        self.pos = pos
        self.str = str
    def current(self):
        return self.str[self.pos]
    def forwards(self):
        self.pos = self.pos + 1
    def backwards(self):
        self.pos = self.pos - 1
    def eos(self):
        return self.pos >= len(self.str)

class LPCRequest(SocketServer.StreamRequestHandler):
    def handle(self):
        #
        # Ok, I think what we do here is read the stream for a while…
        #
        while 1:
            fluff = self.rfile.readline()
            if len(fluff) > 5:
                fluff = python_fromlpc(fluff)
                try:
                    bing = doquery(self.server.connection, fluff[3], fluff[1], fluff[2], fluff[4])
                    results = bing.fetchall()
                    self.wfile.write(makelpcstring_type([fluff[0], 0, results]))
                    bing.close()
                except IOError, (errno, strerror):
                    self.wfile.write(makelpcstring_type([fluff[0], 1, "I/O error(%s): %s" % (errno, strerror)]))
                #except SocketServer.ProgrammingError:
                #    print "Programming Error."
                #    self.wfile.write(makelpcstring_type([fluff[0], 1, "Programming error: "]))
                #    self.wfile.write(makelpcstring_type([fluff[0], 1, "Programming error: " + detail]))
                except TypeError:
                    self.wfile.write(makelpcstring_type([fluff[0], 2, "Type error."]))
                except _mysql.OperationalError, (errno, details):
                    self.wfile.write(makelpcstring_type([fluff[0], errno, details]))
                except:
                    self.wfile.write(makelpcstring_type([fluff[0], 1, "Another error." + str(sys.exc_info()[0])]))

class LPCServer(SocketServer.TCPServer):
    def __init__(self, addr, type, conn):
        SocketServer.TCPServer.__init__(self, addr, type)
        self.connection = conn

class CurrentConnection:
    db = 'errors'
    user = 'atuin'
    passwd = ''
    connection = MySQLdb.connect(host='127.0.0.1', user='atuin', db='errors', port=3306, passwd='')

def connectTo(connect, db, user, passwd=''):
    if connect.db != db or connect.user != user or connect.passwd != passwd:
        connect.connection.close()
        connect.connection = MySQLdb.connect(host='127.0.0.1', user=user, db=db, port=3306, passwd=passwd)
        connect.passwd = passwd
        connect.user = user
        connect.db = db

def doquery(conn, db, user, passwd, query):
    connectTo(conn, db, user, passwd)
    fluff = MySQLdb.DictCursor(conn.connection)
    fluff.execute(query)
    return fluff

def makelpcstring_float(bing):
    return fpformat.sci(bing, 10)

def makelpcstring_int(bing):
    return str(bing)

def makelpcstring_string(bing):
    bing = string.replace(bing, '\r\n', '\r')
    return '"' + string.replace(bing, '"', '\\"') + '"'

def makelpcstring_dict(bing):
    ret = '(['
    for kw in bing.keys():
        ret = ret + makelpcstring_type(kw)
        ret = ret + ':'
        ret = ret + makelpcstring_type(bing[kw])
        ret = ret + ','
    ret = ret + '])'
    return ret

def makelpcstring_type(row):
    if type(row) is types.IntType or type(row) is types.LongType:
        ret = makelpcstring_int(row)
    elif type(row) is types.FloatType:
        ret = makelpcstring_float(row)
    elif type(row) is types.StringType:
        ret = makelpcstring_string(row)
    elif type(row) is types.ListType:
        ret = makelpcstring_array(row)
    elif type(row) is types.DictType:
        ret = makelpcstring_dict(row)
    else:
        ret = '"broken"'
        print 'Broken type', type(row), types.IntType,
    return ret

def makelpcstring_array(arr):
    ret = '({'
    for row in arr:
        ret = ret + makelpcstring_type(row)
        ret = ret + ','
    ret = ret + '})'
    return ret

def python_fromlpc(input):
    "Changes the input string into a real python type."
    fluff = InputString(input, 0)
    return python_fromlpctype(fluff)

def pythonnum_fromlpc(input):
    start = input.pos
    input.forwards()
    while not input.eos() and (input.current() >= '0' and input.current() <= '9' or input.current() == '.'):
        input.forwards()
    return string.atoi(input.str[start:input.pos])

def pythonstring_fromlpc(input):
    start = input.pos
    skip = 0
    while not input.eos() and (input.current() != '"' or skip):
        if input.current() == '\\':
            skip = 1
        else:
            skip = 0
        input.forwards()
    ret = input.str[start:input.pos]
    ret = string.replace(ret, '\\n', '\n')
    ret = string.replace(ret, '\\"', '"')
    input.forwards()
    return ret

def pythonlist_fromlpc(input):
    ret = []
    while input.current() != '}':
        ret.append(python_fromlpctype(input))
        if input.current() == ',':
            input.forwards()
    if input.current() == '}':
        input.forwards()
    input.forwards()
    return ret

def pythondict_fromlpc(input):
    ret = {}
    while input.current() != ']':
        key = python_fromlpctype(input)
        if input.current() == ':':
            input.forwards()
        ret[key] = python_fromlpctype(input)
        if input.current() == ',':
            input.forwards()
    if input.current() == ']':
        input.forwards()
    input.forwards()
    return ret

def python_fromlpctype(input):
    if (input.current() >= '0' and input.current() <= '9') or input.current() == '-':
        return pythonnum_fromlpc(input)
    elif input.current() == '(':
        input.forwards()
        if input.current() == '[':
            input.forwards()
            return pythondict_fromlpc(input)
        elif input.current() == '{':
            input.forwards()
            return pythonlist_fromlpc(input)
    elif input.current() == '\"':
        input.forwards()
        return pythonstring_fromlpc(input)
    return None

#result = doquery('errors', 'atuin', '', 'select * from comments where commenter=\'pinkfish\'');

#fluff = makelpcstring_array(result.fetchall())

#print python_fromlpc(fluff)

#
# Now we go into a loop waiting for someone to say something
#
while 1:
    fluff = LPCServer(('localhost', 7865), LPCRequest, CurrentConnection())
    fluff.serve_forever()

One thing to note… this is for an LPMUD. A good chunk of the code above exists to build and disassemble LPC data chunks, which is what the LPMUD talks over the socket in question (a "mudmode" socket, which is just TCP but expects and sends LPC datagrams). If you're building this for your own codebase, you won't need all that.
29 May, 2014, syn wrote in the 19th comment:
Votes: 0
I just wanted to thank you both again for the information and thoughts. The .so libs, and not needing to do a copyover when changing a function (or whatever), are particularly nice.

Thus far I have put getting some kind of external medium up, to bypass the blocking C API, on hold. I think eventually it would be beneficial and the best way to go, but at this point it would serve little purpose other than to get it done now.

I have, in general, been learning a lot more about using SQL in the code, and that's likely for the better. I like having a complete understanding of what it can do end to end, versus not knowing and wondering what it all would look like from the server end. In the meantime, I've begun adding interceptors to take data structures that are loaded into MUD memory and save the data 'records', for lack of a better term, to SQL. I realize now how trivial that is, but I hadn't even thought of it at first. Nearly 100 manual copy/format/pastes of help entries in, and I nearly punched myself for being so silly: 3 hours in, and after the lightbulb, it took just a few minutes to write a function and 3 seconds to run it, saving 4 or 500 helpfiles to SQL. Yeah… silly.

One thing I am curious about, in general (mayhap a semantic debate, or not, hence the question):

Is it 'better' or preferred to let most of the data sit on the server and be queried directly, OR to load most (certain) data into a structure in memory within the MUD, modify the MUD data, and save that down (or the reverse: save to SQL and then reload the memory instances)?

I have a feeling that question is as ambiguous as my first one and entirely subjective to context, but perhaps there is a general best practice, or something I do not know in that regard.

For instance, I took a split approach with a function I just finished up (which likely is not remotely elegant, but it works without leaks or crashes so I am currently happy).

The command is for a project-logging system, almost a 'message' board. You can create a 'project', add data to it, set types, completion status, etc.

This project entity is a structure within the MUD; I take the saved data and load it from SQL into the struct in load_projects at boot up.
Changes to the Project data (the parent structure) are made to the memory instance and then saved to SQL. As this is not a particularly critical system, I figured potentially losing data in a crash was an acceptable risk.

Secondary to this system is a log system that creates a log associated with a project. Likely through my inexperience, I had a hell of a time getting the association to work from the MUD end with both of them acting as structs.
As I got near the end, having re-written the log portion, I can somewhat see where I could have solved my own problems, but that is another matter.

I ended up eschewing the log as a struct and left it as just a data entity in the DB. All polls and writes are done from the MUD to SQL directly. Of course, up front there is no speed enhancement or anything like that, but it was much easier for me to integrate a log scheme into the projects this way.

Given this example, beyond whether it works or not, as it does, is this practice a 'bad habit'?

EG, A character structure for instance:

There are core parts of a character structure which are used constantly; it would likely be idiotic not to keep those hot in the MUD's own memory space. Leaving them in SQL would generate a constant stream of requests, which with the blocking C API would likely be bad.

But for the portions of a char structure that are not used very frequently: would it be a bad idea to split those off, remove them from general loading, and only deal with them directly to and from SQL?

Say, We have name, race, skill list, sex, questflag1, progflag1, weight

name, race, skill list, and questflag1 are (in this example) used frequently;
the remaining fields are rarely, if ever, used.

Would it make sense to split those off of the character structure and fetch them only as needed, or given how minimal a MUD is in today's realm of technology, is that entirely moot anyway?

It just feels wasteful to have (in the case of this codebase) such a ridiculously huge char data structure when perhaps only half of it is regularly used. I figure with a proper utility setup you could distill the data access to something along the lines of progflag1 = get_char_data(ch, progflag1); so beyond the original backend setup and updating the code to use the new calls, it shouldn't be an issue.

Am I overthinking/being naive?

Again thanks, regardless of the answer(s).
29 May, 2014, quixadhal wrote in the 20th comment:
Votes: 0
The question you need to ask is: what are your timing constraints?

Every time you query a database, it takes time for the database to assemble the result set, and time for you to pull it back across the connection, and then (maybe) time to parse the results to put into the structures the rest of your code uses. An API that allows direct binding of result set columns to structure fields moves part of that off of you and into the fetching part, but it's still non-zero time.

Now, a query to an SQL server, even on the same machine, is an order of magnitude slower than a direct memory access. If your SQL server is on a machine elsewhere on the LAN, it's another order of magnitude. While a text MUD doesn't take nearly as much resources as a modern game, you still have to deal with latency as a real constraint that you can't get away from.

So, how much latency is acceptable? When a user types a command and sends it to the game, they expect to see some kind of feedback. Typically, we've always considered 200ms as the maximum value people are willing to accept for your game to not seem "laggy".

As a result, it's not unusual for the main loop of a game driver to be written so that it does whatever it needs to do and then sleeps for 200 - (however-long-it-took) ms. If that number is <= 0, you've taken too long and missed a tick, which the players will perceive as "lag".

With that in mind, how long will it take to load something from your database? If you're stuck with a synchronous API, that's critical. A query that seems nice and fast to you might take 50ms round-trip from the time you send it to the time the results are back. That just used up 1/4 of your entire processing time for this tick!

Oh, but asynchronous will solve that, right? No, not really. All it does is push the problem up a level. If the data you need to continue processing isn't in memory, it doesn't matter whether the query blocks or not… you still can't continue processing that event until you have the data. For some things, pushing it off until the next tick is fine. Loading a player when they log in, even loading an NPC, might be OK… but if you start breaking up your objects so parts of them are loaded and parts are not, you'll likely find that you are CONSTANTLY having to wait on various things, because something needs that data.

MUDs, by their nature, don't scale well because their data is highly coupled. The character data ties into the room they're in, which ties into all the NPCs and objects in it, which ties into all their equipment and inventory. It might also tie into adjacent rooms and everything in them. A single event in a room might require hundreds of objects to be loaded and queried to be able to inform the user. Now you're looking at hundreds of 50ms queries, and even if you try to push them off to subsequent ticks, you're still going to end up with a delay that's quite obvious.

The other difference is that a graphical MMO can delay loading of things by nature of the visuals. Players accept "pop-ins" where objects spring into existence 50 feet away, even though they can see the landscape for 2000 feet. It's acceptable to have mountains load gradually, so the player sees just the raw landscape until they get closer and the trees or rivers load. In a text game, you can't do that. Everything that is being shown has to be loaded, because you generate the entire description at once.

You may think, at the moment, that you know what will and won't be used frequently. That will change when you start writing the actual game part of your system. In your example, you think nobody will use "sex" or "weight" very often. How about when somebody codes a tribe of amazons who are KOS to males? Now every NPC will need to look at your gender to decide if they should attack or not. How about when you code a rope bridge that has a weight limit? Or a mount? Or are using weight as a factor in determining the success of bashing a door down?

One extreme of this spectrum is to keep everything in the database and only load it as you need it. The other is to load everything into memory and only write to the database when something changes. The middle ground is where the real artwork comes into play. A good DBA who is also a good programmer can do wonders with developing caching routines to make database activity seem seamless. Odds are, if you are such a person, you do this for a living and already know more than I'm telling you. :)

Remember… this isn't 1990. Your MUD isn't World of Warcraft. However big you think it is, it will fit entirely in memory on even a cheap machine… my own $300 server has 16G of RAM. Make stuff that works, then worry about how elegant it is or isn't.