09 Aug, 2010, JohnnyStarr wrote in the 21st comment:
Votes: 0
To me there is a difference between a truly persistent world and a seemingly persistent world. I think a more sound design would be to save the state of every game object when its area goes into standby mode (after a specified period of no player activity). Then, on activation, adjust each object according to the difference between the standby timestamp and the activation timestamp. If an "oak tree", for example, had only 45 minutes of life left before it toppled over, and the area spent 30 minutes in standby, you subtract that elapsed time on activation and the tree's remaining lifespan drops to 15 minutes.

Some care would be needed, for example making sure that if all the timers in an area have run out while it sat inactive for a long time, you don't fire off all the events simultaneously on activation. But you wouldn't have the overhead of thousands of rooms, objects, mobs, etc. running 24/7.
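Something along these lines, just a rough sketch (the struct and function names are made up for illustration):

#include <stdio.h>
#include <time.h>

typedef struct obj {
    const char *name;
    long seconds_left;        /* countdown timer, e.g. 45 * 60 for the oak tree */
    struct obj *next;
} obj_t;

typedef struct area {
    time_t standby_at;        /* timestamp recorded when the area went to standby */
    obj_t *objects;
} area_t;

/* On activation, subtract the elapsed standby time from every timer.
 * Anything that ran out while the area slept is handled here one object
 * at a time, instead of everything firing at once. */
void area_activate(area_t *area, time_t now)
{
    long elapsed = (long)difftime(now, area->standby_at);

    for (obj_t *o = area->objects; o != NULL; o = o->next) {
        o->seconds_left -= elapsed;
        if (o->seconds_left <= 0)
            printf("%s: timer ran out while the area was on standby\n", o->name);
    }
}

int main(void)
{
    time_t now = time(NULL);
    obj_t tree = { "oak tree", 45 * 60, NULL };
    area_t forest = { now - 30 * 60, &tree };   /* standby began 30 minutes ago */

    area_activate(&forest, now);
    printf("oak tree has %ld minutes left\n", tree.seconds_left / 60);   /* 15 */
    return 0;
}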
09 Aug, 2010, David Haley wrote in the 22nd comment:
Votes: 0
Re: persistence: persistence isn't the word you're looking for. What you're describing is more like an autonomous world. Persistence is when state is preserved. You can have a persistent world without much activity; you don't need autonomous AI activity to be persistent.

Quote
I feel like on an ideal MUD, the tree does fall in the forest if there's no one around to hear it.

Well, on an ideal MUD, maybe, but that's perhaps only realistic on a small MUD. If you have a MUD with hundreds of thousands of independent agents, and you want them to be relatively intelligent, you really don't want to be simulating all those AI routines when it will all go unnoticed.

If you really want the world to move when players aren't there to see it – and this can indeed be useful in some cases – it's better to shift the focus to macro-level algorithms that simulate in bulk when players aren't there, and shift to micro-level algorithms that simulate mobs individually when a mob's individuality actually matters (e.g., during combat).
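Roughly what I have in mind, as a toy sketch (the types and function names here are invented for illustration, not taken from any codebase):

#include <stdio.h>

typedef struct mob {
    const char *name;
    struct mob *next;
} mob_t;

typedef struct zone {
    const char *name;
    int player_count;
    int goblin_population;    /* aggregate state used by the macro-level update */
    mob_t *mobs;
} zone_t;

static void run_mob_ai(mob_t *mob)
{
    /* micro level: individual decisions, pathfinding, combat, etc. */
    printf("  %s acts on its own\n", mob->name);
}

static void simulate_zone_in_bulk(zone_t *zone)
{
    /* macro level: one cheap bulk update standing in for thousands of
     * individual decisions, e.g. adjusting a population count */
    zone->goblin_population += zone->goblin_population / 10;
}

void update_zone(zone_t *zone)
{
    if (zone->player_count > 0) {
        for (mob_t *m = zone->mobs; m != NULL; m = m->next)
            run_mob_ai(m);                    /* players watching: simulate individually */
    } else {
        simulate_zone_in_bulk(zone);          /* nobody around: simulate in bulk */
    }
}

int main(void)
{
    mob_t chief = { "goblin chief", NULL };
    zone_t camp = { "goblin camp", 0, 100, &chief };

    update_zone(&camp);       /* no players: bulk update only */
    printf("%s population is now %d\n", camp.name, camp.goblin_population);

    camp.player_count = 1;
    update_zone(&camp);       /* a player shows up: run each mob's AI */
    return 0;
}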
09 Aug, 2010, Oliver wrote in the 23rd comment:
Votes: 0
David Haley said:
Re: persistence: persistence isn't the word you're looking for.


Huh. I guess I could be misusing it? Wikipedia defines it as "[A persistent world (PW) is] a virtual world that continues to exist even after a user exits the world and that user-made changes to its state are, to some extent, permanent."

I feel like a world that is dynamically loaded/saved when players leave doesn't really fit into that definition. But I don't know a whole lot about game design/nomenclature, so forgive me if I'm wrong.
09 Aug, 2010, David Haley wrote in the 24th comment:
Votes: 0
Well, yes, the world is still around even if there are no players, and changes stay around and aren't cleared on reboot. But that doesn't mean that it's autonomous.

Contrast with worlds in many generated-level games, where the world vanishes as soon as the last player leaves. Diablo is an example of this. Yet other games have autonomous activity (i.e. activity away from and independent of players) but also disappear when the players leave.

So really, autonomy and persistent state are different notions.
09 Aug, 2010, Runter wrote in the 25th comment:
Votes: 0
Consider a MUD whose only goal is to traverse mazes. In such a MUD there is no autonomy; however, the state of each door, open or shut, might still persist.
09 Aug, 2010, Scandum wrote in the 26th comment:
Votes: 0
Davion said:
Ya know, suggesting this particular implementation in this condition seems a bit… wrong. Consider that to cycle through the array you'd have to go through every slot regardless of whether the room exists or not. In this case, with the hash table, you're wasting very few iterations.

If the array is more than 40% full you'll save memory, and given that arrays cache particularly well, looping through one will be roughly 10 times faster than walking a fragmented linked list.

If you want an even bigger speed gain you can loop through each area, then loop from the area's low vnum to the area's high vnum. This might slow things down if the array is more than 80% full.
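Something like this, assuming a global array indexed by vnum and an area list that carries low/high vnum bounds (the names here are illustrative, not Merc's actual ones):

#include <stddef.h>

#define MAX_VNUM 32768

typedef struct room {
    int vnum;
    /* ... */
} room_t;

typedef struct area {
    int low_vnum;
    int high_vnum;
    struct area *next;
} area_t;

room_t *room_array[MAX_VNUM];   /* NULL where no room exists */
area_t *area_list;

/* Instead of scanning all MAX_VNUM slots, only scan the vnum ranges that
 * areas actually claim; empty slots inside a range are skipped. */
void for_each_room(void (*fn)(room_t *))
{
    for (area_t *a = area_list; a != NULL; a = a->next)
        for (int vnum = a->low_vnum; vnum <= a->high_vnum; vnum++)
            if (room_array[vnum] != NULL)
                fn(room_array[vnum]);
}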
09 Aug, 2010, David Haley wrote in the 27th comment:
Votes: 0
It's always better when empirical numbers are backed up with empirical evidence…
10 Aug, 2010, Scandum wrote in the 28th comment:
Votes: 0
That memory is saved if the array is more than 40% full is pretty much a fact.

The 10x estimate is probably too high, but a linked list is still at least 2x slower.

http://blogs.msdn.com/b/devdev/archive/2... said:
Let's make this concrete: using Gentoo Linux and gcc on a Pentium 4 3.5GhZ machine, I constructed a linked list of 60 million integers and created an array of the same 60 million integers. I compiled with full optimization. Traversing the linked list required 0.48 seconds, while traversing the array required 0.04 seconds, 12 times faster. Moreover, when I introduced code to fragment the memory pool, the advantage increased dramatically to 50 times or more. The linked list also required 4 times as much memory - twice as much for next pointers, and twice as much again for allocation metadata.
10 Aug, 2010, David Haley wrote in the 29th comment:
Votes: 0
Scandum said:
That memory is saved if the array is more than 40% full is pretty much a fact.

Sigh. :thinking:

Consider an array of size 1,000 with 401 elements in it. It is therefore more than 40% full. Clearly, the size of the array is 1,000 * sizeof(element).
Since elements are pointers, we get: 1,000 * sizeof(pointer).
Let's write p to denote the size of a pointer.
The array therefore uses 1000p bytes.

Let's consider a hash table to store these 401 elements. The size of the hash table is: num_buckets * p + num_elements * p
Explanation: one pointer per bucket; and then, each stored element requires a pointer to the next element in its bucket. We therefore have num_buckets * p for the bucket pointers, and then num_elements * p for the elements.
So the size of the hash table in this case is:
num_buckets * p + 401p

Which is bigger? Let's assume 200 buckets, shall we?

we get: num_buckets * p + 401p = 200p + 401p = 601p < 1000p

Well golly gee, that's actually smaller than the array, even though the array was more than 40% full.

Heck, let's use a straight linked list: then we only use 401*p bytes (one pointer per element to the next one).

So much for things being pretty much fact.

Perhaps you are assuming a linked list that separates the list nodes from the elements being stored (i.e., not the actual topic under discussion, namely how Dikuland does things).
In that case, a linked list with 401 elements will have 401 nodes, each of which has 2 pointers; therefore the memory used is 401*2*p = 802p.
Even in this case, we're using less memory than the array…

So like I said – if you're going to make claims like this, why don't you show your work? Claiming random statements are "fact" is not helpful to anybody, and will put bad ideas into people's heads if they don't stop and question your claims. (If you keep providing unsubstantiated claims, even if they're true, people should always question them – statements made in the air are not educative to people.)
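If anyone wants to check the arithmetic for themselves, here's a trivial program using the numbers from this thread (1,000 array slots, 401 elements, 200 buckets):

#include <stdio.h>

int main(void)
{
    size_t p = sizeof(void *);          /* pointer size on this machine */
    size_t slots = 1000, elements = 401, buckets = 200;

    size_t array_bytes    = slots * p;                    /* flat vnum-indexed array       */
    size_t hash_bytes     = buckets * p + elements * p;   /* hash table, intrusive chains  */
    size_t list_bytes     = elements * p;                 /* straight intrusive list       */
    size_t ext_list_bytes = elements * 2 * p;             /* list with separate nodes      */

    printf("array: %zu, hash table: %zu, list: %zu, external-node list: %zu\n",
           array_bytes, hash_bytes, list_bytes, ext_list_bytes);
    return 0;
}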
10 Aug, 2010, quixadhal wrote in the 30th comment:
Votes: 0
Using the array idea takes us back to the original DikuMUD, which used "vnums" in the files (tinyworld.wld and friends), and "rnums" as the positions in the respective arrays they were loaded into. Many a tear was shed over mistakenly using a vnum where you meant an rnum, and vice versa.

As to persistence… that requires the world to continue functioning as normal, regardless of whether anyone is there to witness it. It's up to you whether you want individual goblin AIs deciding to move their camp closer to the city because nobody is attacking them, or an "NPC AI" that decides to do so for the goblin village as a whole, or whether you just want the "spawn goblins" mechanic to start picking rooms closer to the town.

If the goblins and orcs are supposed to fight over resources, you expect to log back in and find things different from how they were when you logged out last month. It doesn't really matter whether you actually make the NPCs fight, or just change the spawn points and descriptions around so it LOOKS like they fought and one side lost ground. The main thing is that the world persists through reboots, and through times of no player activity. The whole place doesn't just go to sleep and wait for the next player to log in before continuing to evolve.

That said, you wouldn't be able to tell on the majority of games out there, since they don't actually change anyway. Most MUDs treat NPCs as harvestable fruit trees that never run dry… goblins spawn in these places and just stand there waiting for someone to come harvest them. No goals; even the ones that wander do so randomly and without any purpose.
10 Aug, 2010, David Haley wrote in the 31st comment:
Votes: 0
As a note for the pedantic, I took a very small liberty above, because you need to take into account the size of the elements pointed to in the list case. But it turns out that it doesn't affect the result; I'll leave it as an exercise to the reader to see why. :wink:
10 Aug, 2010, Rarva.Riendf wrote in the 32nd comment:
Votes: 0
Quote
Most MUDs treat NPCs as harvestable fruit trees that never run dry… goblins spawn in these places and just stand there waiting for someone to come harvest them.

I'd like to hear from experienced MUD owners (I'm basically only a coder, and a player who never found any interest in fighting mobs, since as a coder… I know how it will end up anyway ;p):
Does giving mobs more intelligent behaviour really appeal to players? Do they stay because of that, or do you need something else anyway?
I've always preferred the PK side of MUDs (most of the code I write goes toward making PK fights enjoyable: not too short, not too long, no ultra-combo that always works whatever the class, and making every little skill worth having).
10 Aug, 2010, Scandum wrote in the 33rd comment:
Votes: 0
David Haley said:
Perhaps you are assuming a linked list that separates the list nodes from the elements being stored (i.e., not the actual topic under discussion, namely how Dikuland does things).
In that case, a linked list with 401 elements will have 401 nodes, each of which has 2 pointers; therefore the memory used is 401*2*p = 802p.
Even in this case, we're using less memory than the array…

We were assuming 200 buckets for the hash table, so for a singly linked list that'll be 200 + 401 * 2 = 1002 pointers, which is more memory.

Rarva.Riendf said:
Having mobs with a more intelligent behaviour really appeal players ?

It's like real life or chess: different games attract different people. The big problem is that as a MUD you attract a particular crowd; if you then change the game drastically, that crowd might leave, which in turn leaves your MUD without the critical mass required to attract a new playerbase that does find your game appealing. So the bottom line is that it's all about marketing.
10 Aug, 2010, David Haley wrote in the 34th comment:
Votes: 0
Quote
We were assuming 200 buckets for the hash table, so for a singly linked list that'll be 200 + 401 * 2 = 1002 pointers, which is more memory.

So, as I said, you're looking at a case that isn't what Diku actually does, because for Diku, the linked list nodes are the elements themselves… in other words, you only have 200 + 401.
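For reference, here's a stripped-down sketch of the layout I'm describing, modeled on how the Merc family stores its room indexes (fields trimmed, details simplified): the chain's "next" pointer lives inside the room index itself, so each stored room costs one extra pointer rather than a whole separate list node.

#include <stddef.h>

#define MAX_KEY_HASH 1024

typedef struct room_index {
    struct room_index *next;   /* intrusive chain pointer */
    int vnum;
    /* ... name, description, exits, etc. ... */
} ROOM_INDEX_DATA;

ROOM_INDEX_DATA *room_index_hash[MAX_KEY_HASH];

/* Look up a room by vnum: hash into a bucket, then walk its chain. */
ROOM_INDEX_DATA *get_room_index(int vnum)
{
    for (ROOM_INDEX_DATA *r = room_index_hash[vnum % MAX_KEY_HASH];
         r != NULL; r = r->next)
    {
        if (r->vnum == vnum)
            return r;
    }
    return NULL;
}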

Furthermore, nothing prevents you from implementing a hash table with arrays under the hood instead of linked elements.

Basically, using a straight array for the rooms with the vnums as the key simply isn't the best of ideas.
10 Aug, 2010, Scandum wrote in the 35th comment:
Votes: 0
David Haley said:
So, as I said, you're looking at a case that isn't what Diku actually does, because for Diku, the linked list nodes are the elements themselves… in other words, you only have 200 + 401.

No idea what you're talking about. In Merc you have a hash table (which is an array) of 1024 elements. Each room index has a next pointer that wouldn't be required if an array were used, so the point where you start saving memory is at a population of roughly 50%, minus the hash table size.

David Haley said:
Furthermore, nothing prevents you from implementing a hash table with arrays under the hood instead of linked elements.

You'd still lose out on O(1) random access and insertion, and add considerable bloat.

David Haley said:
Basically, using a straight array for the rooms with the vnums as the key simply isn't the best of ideas.

Naturally, the simplest and fastest implementation can't be the best of ideas.
10 Aug, 2010, David Haley wrote in the 36th comment:
Votes: 0
Quote
No idea what you're talking about.

Then maybe you should look more closely at the implementation you're criticizing before telling people how it works.

Quote
Each room index has a next pointer that wouldn't be required if an array were used, so the point where you start saving memory is at a population of roughly 50%, minus the hash table size.

Another random assertion without any explanation whatsoever. I won't bother going through the math on this one; doing your homework for you once was enough. :sad:

Quote
You'd still lose out on O(1) random access and insertion, and add considerable bloat.

How often, exactly, is one interested in O(1) random access/insertion for rooms…?

Besides, your array scheme can have O(n) insertion, if you need to resize the array of rooms…
10 Aug, 2010, Rarva.Riendf wrote in the 37th comment:
Votes: 0
I don't know why you are even arguing; iterating through the rooms probably only takes a few milliseconds, whatever the method used.
Optimizing there should be the last of anyone's concerns.
Same for memory.
10 Aug, 2010, David Haley wrote in the 38th comment:
Votes: 0
My objection is with respect to assertions made without explanation, especially when they are wrong, as I stated in post #27.
I said:
It's always better when empirical numbers are backed up with empirical evidence…

I don't really care what data structure is used here; but I do care about misinformation or misleading information being given.
10 Aug, 2010, Scandum wrote in the 39th comment:
Votes: 0
David Haley said:
Another random assertion without any explanation whatsoever. I won't bother going through the math on this one; doing your homework for you once was enough. :sad:

That was an explanation, too bad it's beyond you.

Quote
How often, exactly, is one interested in O(1) random access/insertion for rooms…?

Depends on what you are doing, but assuming one uses an array for object indexes as well, it'll speed up inventory loading. The option to use indexes instead of pointers is an advantage too, especially when pointers are 8 bytes. And as with Diku areas, you don't have to fix exits afterwards, since an index can be created before the actual room becomes available; that saves memory in the exit structure as well, because it doesn't need both an index and a pointer.

Quote
Besides, your array scheme can have O(n) insertion, if you need to resize the array of rooms…

That's so silly an argument I'm not even going to bother refuting it, but thanks for playing.
10 Aug, 2010, David Haley wrote in the 40th comment:
Votes: 0
Quote
That was an explanation, too bad it's beyond you.

I suppose that an alternative to explanation is to insult the reader's intelligence, in hopes that they will go away and forget that claims were not explained.

Quote
Depends on what you are doing, but assuming one uses an array for object indexes as well, it'll speed up inventory loading.

So, ah, if we are talking about access patterns for rooms, why are access patterns for objects relevant?

I asked quite specifically why it is important to have O(1) random access/insertion for rooms.

Besides, the whole point of a hash table is that it effectively makes access and insertion O(1), so I don't know why you're claiming O(1) for the array as some kind of advantage. They're both O(1). The array's constant factor is smaller, yes, but not asymptotically so.

Quote
The option to use indexes instead of pointers is an advantage too, especially when pointers are 8 bytes. And as with Diku areas, you don't have to fix exits afterwards, since an index can be created before the actual room becomes available; that saves memory in the exit structure as well, because it doesn't need both an index and a pointer.

This I have no disagreement with, although it forces people into using numbers only as identifiers, which as we discussed earlier and elsewhere has been shown to be suboptimal.

Quote
That's so silly an argument I'm not even going to bother refuting it, but thanks for playing.

Pray tell how exactly you are going to resize the array of rooms without having to copy the entire array?