03 Sep, 2009, Silenus wrote in the 1st comment:
Votes: 0
I was looking over a textbook chapter on generational garbage collection and thought it would be an interesting exercise to attempt to implement one and install it in either the half-finished runtime system I have constructed or an existing LPMud server. I am curious how challenging the low-level tuning of parameters and implementation-level optimization becomes, and what methods people use when doing this sort of work. Obviously the textbook, which gives the algorithm outline, probably doesn't tell the whole story :P.
03 Sep, 2009, Kintar wrote in the 2nd comment:
Votes: 0
I don't have anything helpful to add, unfortunately. Everything I know about generational GC comes from reading the specs on various VMs. However, I would like to state that you are obviously a masochist, and I wish you every happiness in your self-inflicted torture. ;)
03 Sep, 2009, Silenus wrote in the 3rd comment:
Votes: 0
Heh. Thanks :P
03 Sep, 2009, Ssolvarain wrote in the 4th comment:
Votes: 0
This work?
03 Sep, 2009, Silenus wrote in the 5th comment:
Votes: 0
Is it generational? :P
03 Sep, 2009, shasarak wrote in the 6th comment:
Votes: 0
Do some googling for "garbage collection", "smalltalk" and "john mcintosh".

There isn't really all that much you can tweak, though, beyond the size of newspace and how much oldspace grows by each time it has to. Getting the size of newspace right is the tough part: if you make it too large then there's no point in having a generational system, but if you make it too small then lots of objects lose their last reference just after they get tenured to oldspace, which is extremely wasteful.
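
To make that concrete, the trade-off comes down to a handful of parameters: the nursery (newspace) size, how many minor collections an object must survive before promotion, and how much oldspace grows when it fills. Here is a minimal C sketch of those knobs and the tenuring decision; all names are hypothetical and not taken from any actual collector.

/* Sketch of the tuning knobs in a simple two-generation collector.
 * Hypothetical names only; real collectors track far more state. */
#include <stdio.h>
#include <stddef.h>

typedef struct {
    size_t newspace_bytes;   /* size of the nursery; the main knob */
    unsigned tenure_age;     /* minor collections survived before promotion */
    double oldspace_growth;  /* factor oldspace grows by when it fills up */
} gc_params;

typedef struct {
    unsigned age;            /* minor collections this object has survived */
    size_t size;
} obj_header;

/* Called for each live object found during a minor (newspace) collection.
 * Returns 1 if the object should be promoted (tenured) to oldspace. */
static int should_tenure(obj_header *o, const gc_params *p)
{
    o->age++;
    return o->age >= p->tenure_age;
}

int main(void)
{
    gc_params p = { 256 * 1024, 2, 1.5 };   /* example values only */
    obj_header o = { 0, 64 };

    for (int gc = 1; gc <= 3; gc++)
        printf("minor GC %d: tenure? %s\n", gc,
               should_tenure(&o, &p) ? "yes" : "no");
    return 0;
}

If tenure_age is too low (or newspace too small), short-lived objects get promoted and then die in oldspace, which is the wasteful case described above.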
04 Sep, 2009, David Haley wrote in the 7th comment:
Votes: 0
There are tons of things to tweak in almost any garbage collected system. A very simple example is the frequency of checks and collection. Tuning garbage collectors is perhaps one of the most important things in improving the performance of low-latency, high-performance applications in managed languages.

Lua also has a generational garbage collector, and I think their site references papers regarding its implementation.
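
As a concrete illustration of that kind of knob, Lua exposes its collector's pause and step-multiplier settings through the C API. A minimal sketch of a host program tuning them follows; it assumes Lua 5.1 and its standard C API, nothing MUD-specific.

/* Sketch: tuning Lua's collector from the embedding program.
 * Assumes a host linked against Lua 5.1. */
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

int main(void)
{
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);

    /* Pause: how long the collector waits (relative to allocation)
     * before starting a new cycle. 200 = wait until memory use doubles. */
    lua_gc(L, LUA_GCSETPAUSE, 200);

    /* Step multiplier: how much collection work is done per allocation
     * step; larger values make the collector more aggressive. */
    lua_gc(L, LUA_GCSETSTEPMUL, 300);

    /* Force a full cycle and report the heap size in KB. */
    lua_gc(L, LUA_GCCOLLECT, 0);
    printf("heap in use: %d KB\n", lua_gc(L, LUA_GCCOUNT, 0));

    lua_close(L);
    return 0;
}

The right values depend entirely on the allocation pattern of the program, which is exactly the empirical tuning being discussed here.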
04 Sep, 2009, Silenus wrote in the 8th comment:
Votes: 0
Is such fine tuning an art? Can you acquire the skills for it in some manner other than trial and error?
04 Sep, 2009, David Haley wrote in the 9th comment:
Votes: 0
As with almost any form of practical computer science, it involves a mixture of theory, art, and an understanding of practical realities. In this case, add the ability to make very extensive empirical measurements to understand what is going on and what the system is actually doing. It's not like there's some grand unified garbage collection formula that one can pick up from a paper and simply apply.
04 Sep, 2009, Silenus wrote in the 10th comment:
Votes: 0
Well, I was wondering if it would be possible to pick such skills up without just fumbling around in the dark. A significant amount of effort is typically needed to translate code from papers, since the machine models are sometimes a bit abstracted and don't account for various factors in actual hardware; I assume by practical realities you mean this. I guess the question is: suppose the system could be bashed around to generate extensive data about its performance. Are there guides which give some general principles about how to do "system performance tuning" of this sort on software projects?
04 Sep, 2009, David Haley wrote in the 11th comment:
Votes: 0
I'm not aware of a guide that spells out exactly how to do this kind of stuff. Nobody ever sat me down and gave me a tutorial on performance tuning; I (and everybody I've talked to about it) simply developed the intuition for it as I learned. There are basic principles and ways of handling different allocation patterns; there is some literature on that (and when I say literature, I don't mean a theoretical academic paper, but rather a practical case study).

You seem to have a very strong resistance to learning this stuff (not just GC) empirically, but a lot of these areas are inherently empirical and extremely difficult, if not impossible, to pick up from theory or books without going forth and writing code. It's one thing to understand general principles of memory allocation, but it's another to have a feeling for how those interact with specific hardware, language implementations, and so forth. When tuning these, you need to understand the empirical realities, not the theoretical ones.
04 Sep, 2009, Silenus wrote in the 12th comment:
Votes: 0
I tend to start with materials or books, practical or not, to get some general idea of the area I am looking at; I find it saves me time. Writing code is all fine and good, but I suspect that having some idea of the basic principles can be quite handy. I am not sure what you mean by theoretical papers versus practical case studies. To me the programming literature ranges from very theoretical stuff (mostly all mathematics) to things which mix in some performance studies to sometimes very practical guides. In this case I am looking for a practical guide, not a theoretical piece.

I basically don't particularly like writing code when I don't have a clear inkling of the area, beyond say looking at some examples and perhaps reading some probably somewhat dated material in a textbook. I could implement this directly from a textbook, but the performance, given it would likely be a naive implementation, would be atrocious. I find very empirical approaches tend not to work too well for me. Through pure experimentation I tend to pick up a lot of bad habits and, in the cases where things do work, have no clear idea why they might be working. I end up having to write perhaps four or five drafts of some code before I get anywhere close to getting it right (in this case the problem is even more severe for me because I am not sure I would have it right in any given version).

Basically I generally tend to move from theory -> pragmatics rather than the other way around.
04 Sep, 2009, David Haley wrote in the 13th comment:
Votes: 0
Silenus said:
I am not sure what you mean by theoretical papers versus practical case studies.

I was referring to papers that discuss theory of memory fragmentation and so forth vs. a case study of a particular VM, a particular set of allocation patterns, and practical observations.

Silenus said:
I end up having to write perhaps four or five drafts of some code before I get anywhere close to getting it right (in this case the problem is even more severe for me because I am not sure I would have it right in any given version).

This is the case for almost everybody writing almost anything. Almost nothing is correct the first time around. Trying to get everything perfect before even getting started usually means never getting started at all.

Besides, the very nature of this beast is that it's only really possible to tune and optimize it after having it in your hand.

Systems programming is hugely empirical and it's very difficult to learn without practicing. I'd compare it to learning to spar from reading a book; one could understand the general ideas but would be useless in even friendly sparring.

Part of the learning process is making mistakes, and accepting that that is part of the learning process.


In any event, I guess as far as your specific question is concerned, I can't help further, as I am not aware of anything that spells all of this out. Scott Meyers' Effective C++ book covers some of the concepts, but probably not to your satisfaction.
04 Sep, 2009, Remcon wrote in the 14th comment:
Votes: 0
Well, I did look it up yesterday and read a bit about it, and more or less it is just recycling the memory instead of always disposing of it and creating new things. There are MUDs around that already do this manually, and I've seen plenty of issues in their memory handling. My question is: does it really save enough to even be worth all the time and effort it takes to recycle the memory? Or is it better to just do what it does now, free the memory, and use more? Shrug, just figured I would ask :)
04 Sep, 2009, David Haley wrote in the 15th comment:
Votes: 0
Recycling object memory instead of reallocating and managing the pool every time can make a big difference if the memory allocation pattern is to quickly allocate and destroy tons of small objects.

Of course, most people don't really notice the difference because most people aren't running real-time, highly performance-sensitive applications with allocation patterns like that.

A recent example I saw was a sub-millisecond-latency real-time stock trading system written in Java. Because of how Java works, the heap gets used up the wazoo, so it's extremely important for that to be efficient.
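
To make the recycling idea concrete in C terms: it usually comes down to a per-type free list, where "dispose" pushes the object onto the list instead of calling free(), and "create" pops from the list before falling back to malloc(). Here is a minimal sketch with hypothetical names, not taken from any particular codebase.

/* Minimal free-list pool: recycle fixed-size objects instead of
 * calling malloc()/free() for every allocation. Hypothetical example. */
#include <stdio.h>
#include <stdlib.h>

typedef struct mob {
    struct mob *next_free;   /* links dead objects on the free list */
    int hp;
    char name[32];
} mob;

static mob *free_list = NULL;

mob *mob_create(void)
{
    mob *m;
    if (free_list) {               /* reuse a recycled object if available */
        m = free_list;
        free_list = m->next_free;
    } else {
        m = malloc(sizeof *m);     /* otherwise fall back to the allocator */
        if (!m) { perror("malloc"); exit(1); }
    }
    m->next_free = NULL;
    m->hp = 0;
    m->name[0] = '\0';
    return m;
}

void mob_dispose(mob *m)
{
    m->next_free = free_list;      /* don't free(); push onto the free list */
    free_list = m;
}

int main(void)
{
    mob *a = mob_create();
    mob_dispose(a);
    mob *b = mob_create();         /* reuses the same block as 'a' */
    printf("recycled same block: %s\n", a == b ? "yes" : "no");
    return 0;
}

Whether this beats plain malloc()/free() depends on the allocator and the allocation rate, which is the point being made above about MUDs rarely needing it.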
04 Sep, 2009, Remcon wrote in the 16th comment:
Votes: 0
Well, feel free to correct me if I'm wrong here, but at least in something I saw online about it, it looked like more or less the object is freed and then recreated (or maybe I was just looking at a bad example of one or something). On creation it used something like "recreate"; any clue why you would free it and then recreate it, or is that just a bad example? I know some of the MUDs don't actually free all of it; they keep the pointer itself and just free most of the data, and later that data gets replaced with more data. Just curious which is best: freeing it and using the recreate, or freeing all but the pointer itself and then putting new data in it later.
04 Sep, 2009, David Haley wrote in the 17th comment:
Votes: 0
What exactly are you talking about? Are you talking about a specific MUD codebase or the general idea of dealing with many allocations of small objects?

Even if you "delete" an object, it doesn't necessarily mean that the garbage collector goes and collects it right there: it might leave it around and recycle it the next time the same class is instantiated.

As to which is "best" for MUDs, as I said, I don't think it really matters. MUDs aren't extremely high-performance real-time systems allocating and destroying many thousands of objects per millisecond.
04 Sep, 2009, Silenus wrote in the 18th comment:
Votes: 0
I think at one point way back I remember some discussions between some of the maintainers of the MudOS LPMud server comparing its capabilities to early versions of Java, i.e. stuff like the bytecode size of different programs and the like. I think in principle a server program like MudOS is quite generic: you can create and run almost anything you want on it, which is sort of why I am somewhat interested in all this optimization stuff. It is probably useless for the typical MUD, but it would be interesting to see how far you can push the platform (or something close to equivalent to it), which can still be used to run MUDs but can also be used for other purposes.
04 Sep, 2009, Silenus wrote in the 19th comment:
Votes: 0
Thanks David. I guess for this sort of thing it sounds like I don't have much of a choice. I did find some materials on how to measure changes in the performance of a given system (probably old hat to you) and different techniques for doing so (a paper on the ACM had a list of references). I guess I will start with something simple and then try something more ambitious, maybe to pair with one of the LPMud servers.
04 Sep, 2009, Remcon wrote in the 20th comment:
Votes: 0
Ah, just asking to feed the curiosity :) I do understand that, and I kind of found it pointless in the MUDs I've seen that keep the data around and then reuse it the next time an object/mobile is created. I was kind of wondering if that is more or less what this whole garbage collecting is all about, though. In what little I saw online (granted, I think the example I saw was in C++) it looked a bit different. In the MUDs I've seen that did it, they freed all but the actual pointer, which they kept around and would reuse later for another of the same type, but the one example I looked at for this fully freed the pointer and then used recreate right afterwards. Granted it was a small example, but it had me wondering if that is how it does it or not. You did kind of answer that, though, when you said it may not recreate it till another of the same type is called.

So you could fully free the memory, and just use recreate instead of create, for example, to reuse the memory. Isn't that more or less the same as freeing it and creating a new pointer later? Let me see if I can word it differently. If you let a MUD run for a very long time and it was always creating and removing things (like it would with a massive number of players), would you be wasting memory by using the normal create and dispose, compared to using create/dispose/recreate? Hope that makes sense anyways, lol, always doing too many things at once lol.