26 Aug, 2014, ForgottenMUD wrote in the 41st comment:
Votes: 0
That's OK, Kelvin. I know your opinion because you've stated it several times already. Perhaps you are right, however it doesn't mean that all statements against multithreading were right. I was responding to a few statements that I forgot to quote.

Quote
THAT, in turn, means any gains you might have from multi-threading usually get lost in having the majority of your threads sitting on semaphore locks anyways.
26 Aug, 2014, Scandum wrote in the 42nd comment:
Votes: 0
Rather than threads use two separate programs that communicate. Then the main issue becomes how to communicate clearly and efficiently, which is where things get interesting.
26 Aug, 2014, Kelvin wrote in the 43rd comment:
Votes: 0
ForgottenMUD said:
That's OK, Kelvin. I know your opinion because you've stated it several times already. Perhaps you are right, however it doesn't mean that all statements against multithreading were right. I was responding to a few statements that I forgot to quote.

Quote
THAT, in turn, means any gains you might have from multi-threading usually get lost in having the majority of your threads sitting on semaphore locks anyways.


Who cares, though? If you are creating a heavily threaded MUD for fun, go do it and stop defending it here. There's no need to defend tinkering for the sake of tinkering.

If you are legitimately creating a heavily threaded MUD because that's the best way you can think to do it, listen to the advice here and consider alternatives.

/thread
26 Aug, 2014, plamzi wrote in the 44th comment:
Votes: 0
Kelvin said:
Idealiad said:
@plamzi, there is a clear benefit to writing threaded code, which koteko mentioned in passing earlier. It is much easier to reason about (i.e. understand the flow of) threaded code compared to async/callback code.


I'm sure it depends on the language/toolset, but I have a hard time agreeing with threaded code being easier to understand than async code. Especially for languages that support inline callbacks.


koteko said:
You are wrong: it makes my code more comfortable for me to work with. That's enough on its own.


I'm willing to concede that async code vs. multi-threaded can go either way depending on the details (including prior experience). But I would have to be very drunk indeed for someone to convince me that adding thousands of threads to single-threaded code makes their code "more comfortable to work with."

10 tequila shots later, I would also probably begin to believe that adding all those threads to a MUD server provides more than a 5% performance boost.
27 Aug, 2014, koteko wrote in the 45th comment:
Votes: 0
Plamzi, to "add multithreading" to single-threaded code would be foolish indeed, unless you really need to (which implies benchmarking). I'm coding from scratch in Java, and multithreading was just the simplest approach for many issues (first and foremost network IO, but also task scheduling, DB persistence, mob AI…). So I don't "add multithreading"; I do multithreading as it's more natural for me, in Java at least.

Let's look at the one-thread-per-client model ForgottenMUD uses (and I used it too until a few days ago - I've switched to a library now). This is just one example, of course; it can be done in many ways. It all starts with a Java class that provides methods to start and stop it, like:

clientListenerService.start();
// after this line, you know that in background clients are being accepted,
// so you can enter the game loop/do other stuff as you wish


The "start" method just spawns a thread for this service that in turn runs its "run" method, so defined:

public void run()
{
    try
    {
        // entry point for client connections
        ServerSocket serverSocket = new ServerSocket(TCP_PORT);

        while (!Thread.currentThread().isInterrupted())
        {
            // accept() blocks until a client connects; hand the socket to a new thread
            new Thread(new ClientHandler(serverSocket.accept())).start();
        }
    }
    catch (IOException e)
    {
        // thrown when the socket fails or is closed during shutdown
    }
}


The call to "accept" blocks until a client connection comes in. At that point, it returns the client socket, which we pass to the ClientHandler constructor. Note that the CPU usage of a blocked thread is close to 0 on modern JVMs and CPUs, even if you have hundreds of thousands of threads.

The ClientHandler object would also be a "Runnable", like the ClientListener class. "Runnable" is an interface whose implementations can be run in a thread or handed to an ExecutorService (a family of classes that provide thread pools, shutdown mechanisms, scheduling for future tasks, etc. for free!).

Now, within the "ClientHandler" execution you can put your logic for handling player commands, for example. When you work on the ClientHandler class, you can effectively code with the idea that the flow will only affect a single player.

For example, I can do database access there, because if a single client gets a few tens of milliseconds of added latency, it's not a big deal. I also put a sleep there to add the intentional latency I want between each player's commands.

The interesting part is that if the "clientListenerService" is shut down the Java way, all threads within it receive an "interrupt", which they check with "currentThread().isInterrupted()", so each client is closed cleanly.
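To make that interrupt-based shutdown concrete, here is a minimal, self-contained sketch (class and message names are mine, not from koteko's code; I'm using an ExecutorService, which is one common way to wire this up) showing how shutdownNow() delivers the interrupt that a worker's isInterrupted() check observes:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch started = new CountDownLatch(1);
        CountDownLatch finished = new CountDownLatch(1);
        ExecutorService pool = Executors.newCachedThreadPool();

        pool.submit(() -> {
            started.countDown();
            // Simulated client loop: spin until the interrupt flag is set.
            while (!Thread.currentThread().isInterrupted()) {
                Thread.yield();
            }
            System.out.println("client closed cleanly");
            finished.countDown();
        });

        started.await();    // wait until the worker is actually running
        pool.shutdownNow(); // interrupts every active worker thread
        finished.await();   // the loop observed the interrupt and exited
    }
}
```

The same mechanism applies if you start raw Threads yourself and call interrupt() on each during shutdown.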

This, for me, is more comfortable than asynchronous IO. It could be different for other people; it's just personal preference, of course. It reminds me of friends who couldn't use recursive functions to solve problems, or colleagues who have trouble using functional languages. Nothing "objective", just preference and experience, I guess.
30 Aug, 2014, plamzi wrote in the 46th comment:
Votes: 0
koteko said:
Plamzi, to "add multithreading" to single-threaded code would be foolish indeed, unless you really need to (which implies benchmarking). I'm coding from scratch in Java, and multithreading was just the simplest approach for many issues (first and foremost network IO, but also task scheduling, DB persistence, mob AI…). So I don't "add multithreading"; I do multithreading as it's more natural for me, in Java at least.


I actually understand this line of reasoning very well, and it makes a lot more sense when you put it that way. It sounds very similar to why I decided to stay async throughout in my node.js server. But before I did, I thought long and hard about what I would gain vs. what I would lose. The fact is it's very doable to write a "limited async" MUD server in node.js, and a single-threaded MUD server in Java (with multithreading used only in cases where it will really shine). In other places, all you have to do is buffer I/O and evaluate it sequentially. The benefits in terms of simplicity are extremely significant. And the bottom line is that if you do that, you will not lose performance in any noticeable way.

The reason I decided to embrace async mostly has to do with the fact that I'm trying to achieve very high persistence, and that involves a lot of db I/O. In your case too, there will be a lot to be said for using threads in the db code/lib. It would also be silly not to utilize multithreading when you do things like DNS lookups on player IP addresses, or have your server code post a tweet or send an email, e.g. Even one thread per player is a pretty reasonable proposition, for the reasons you stated in your latest post, and FMUD has done just that.

That being said, it sounds like you took the multithreading concept and really ran with it when you decided to try a thread per room. To a lot of people in this thread, this seems like overkill that is very likely to come back to haunt you down the line.
30 Aug, 2014, Pymeus wrote in the 47th comment:
Votes: 0
The usual reason people want to tack on threads to their mud is the performance boost they perceive to be inherent to threaded programs. That's usually a difficult path to go down. Concurrency is most useful when your mutable data is guaranteed to not be interrelated (else you need some form of locking). That isn't especially common in muds.

But if the use of threads is motivated by a preferred coding style, that's a perfectly sane design decision. A single master lock (or a handful of coarse-grained locks) on the game data has negligible effects on complexity and performance, but still gets that desired coding style.
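A minimal illustration of that single-master-lock point (all names here are hypothetical, not from anyone's actual codebase): two threads mutate shared game state concurrently, but because every command handler takes the one coarse lock, no updates are lost and each handler sees a consistent world:

```java
import java.util.ArrayList;
import java.util.List;

public class MasterLockDemo {
    // One coarse-grained lock guarding all mutable game state.
    private static final Object GAME_LOCK = new Object();
    private static final List<String> eventLog = new ArrayList<>();

    static void handleCommand(String player, String cmd) {
        synchronized (GAME_LOCK) {
            // Everything inside here sees a consistent world state.
            eventLog.add(player + ": " + cmd);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> { for (int i = 0; i < 1000; i++) handleCommand("alice", "n"); });
        Thread b = new Thread(() -> { for (int i = 0; i < 1000; i++) handleCommand("bob", "s"); });
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(eventLog.size()); // all 2000 updates survived
    }
}
```

Without the synchronized block, the unsynchronized ArrayList would likely drop or corrupt entries; with it, the code keeps the threaded style at the cost of one lock acquisition per command.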
31 Aug, 2014, ForgottenMUD wrote in the 48th comment:
Votes: 0
koteko said:
I do multithreading as it's more natural for me, in Java at least.


It's natural to think in terms of threads, but not to add locks. However, after doing some research, my understanding is that it's much easier in Java because multithreading is well supported, which is what I thought when I saw them use multithreading in the socket tutorial. And if you want to make it easy and 100% safe, you can type, in your client/command handler,

synchronized(room/area) { <all your commands> }


And store the room/area objects in a ConcurrentHashMap (you can't use room IDs directly as lock parameters, because seemingly identical IDs can refer to different objects). "synchronized" queues threads on the lock object. Also, new keys in a ConcurrentHashMap must be added with putIfAbsent(); anything based on a separate read/retrieval action creates race conditions, cf. the API docs.

This is effectively the same as one thread per room, which is what the OP wants to do to avoid locks, but the simplest possible implementation.
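As a sketch of the putIfAbsent() point above (room IDs and class names are illustrative): the broken check-then-act idiom is shown in a comment, while the atomic version guarantees every thread ends up synchronizing on the same per-room lock object:

```java
import java.util.concurrent.ConcurrentHashMap;

public class RoomLocks {
    private static final ConcurrentHashMap<String, Object> roomLocks = new ConcurrentHashMap<>();

    // Returns one canonical lock object per room ID, even under concurrent calls.
    static Object lockFor(String roomId) {
        // Broken check-then-act (two threads can both see the key missing
        // and install different lock objects):
        //   if (!roomLocks.containsKey(roomId)) roomLocks.put(roomId, new Object());
        Object fresh = new Object();
        Object existing = roomLocks.putIfAbsent(roomId, fresh); // atomic insert-if-missing
        return existing != null ? existing : fresh;
    }

    public static void main(String[] args) {
        Object a = lockFor("temple");
        Object b = lockFor("temple");
        // Both callers got the same object, so synchronized(lockFor("temple")) is safe.
        System.out.println(a == b);
    }
}
```

On Java 8+, computeIfAbsent() expresses the same thing without the throwaway allocation.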
31 Aug, 2014, Rarva.Riendf wrote in the 49th comment:
Votes: 0
> This is effectively the same as one thread per room, which is what the OP wants to do to avoid locks

You cannot avoid locks. Ever, if you have rooms.
I once started thinking about multithreading because the pathfinding algorithm I inherited from an old codebase was a bottleneck.
After some talk here, I realised that:
1: The fact it was a bottleneck was because it was coded stupidly.
2: Multithreading was a nightmare to implement, because calculating a path when you don't even know whether a room will disappear (or randomly be closed, etc.) while you are calculating is insane without locking all the rooms.

And since moving is a huge part of the life of a MUD (I mean one that's a little lively), well… you do not want to multithread. You would actually end up locking all the rooms practically all the time, rendering multithreading them useless in the first place.
31 Aug, 2014, ForgottenMUD wrote in the 50th comment:
Votes: 0
Rarva.Riendf said:
You cannot avoid lock. Ever, if you have rooms.


I don't think you followed the OP's logic. He doesn't want to manage locks. His idea is to make a thread for each room. Then the threads are not competing to modify room containers, hence you don't need to lock them.

As for your pathfinding, your technique is not universal. I assume you need to lock your whole path because you treat pathfinding as one big teleportation command. Pathfinding can instead send a bunch of commands (e.g. n, n, w, e), which means you only lock rooms one by one. Also, the map is usually(?) not dynamically modified but cached. If you bump into an obstacle that didn't exist before, it updates the cache and stops, or finds a new path. Caching drops the cost. This is how the MUSHclient mapper works.

BTW, not all servers have changing maps: I've seen several MUDs with static maps using teleporters as abstraction/replacement for doors (DarkEden to name one).
31 Aug, 2014, Rarva.Riendf wrote in the 51st comment:
Votes: 0
> I assume you need to lock your whole path because you treat pathfinding as one big teleportation command.

No, I would need a lock so I can actually calculate it, as I have rooms that can disappear, exits that can change, etc. If I did not 'lock' all the rooms I could 'theoretically' end up never being able to complete the task, even if it is a very low probability (as most of the MUD is 'stable'). Once the path is calculated, the commands are sent, and if the path changed, of course the player will be stopped and will have to ask again (because I do not want to force him to move endlessly either).
For performance purposes I do use some caching, with precalculated paths between 'stable' areas that I know won't move.

The OP does not seem to care about those problems. (Remember: "Consistency is a trade-off in many systems. You haven't worked in a particularly big and competitive company if you don't understand that.")

I am sure people working in high-frequency trading don't care about it either; after all, the banking system is a pretty small and uncompetitive business…
31 Aug, 2014, plamzi wrote in the 52nd comment:
Votes: 0
Rarva.Riendf said:
> I assume you need to lock your whole path because you treat pathfinding as one big teleportation command.

No, I would need a lock so I can actually calculate it, as I have rooms that can disappear, exits that can change, etc. If I did not 'lock' all the rooms I could 'theoretically' end up never being able to complete the task…


We're getting offtopic here, but I believe what FMUD is trying to say is that you don't need to return the whole path at once. Your pathfinding command could just return the next move towards the goal, and you re-eval when the next move is complete (or fails). In that case, you may even get away with no locks at all (not sure, no time to think this through) because even if the next room disappears, the pathfinding will try to find another route after the move fails.
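A sketch of that "next move only" idea (the world layout and all names are made up for illustration): a breadth-first search over the current exits that returns just the first step of a currently-shortest path. The caller re-runs it after every move, so a room vanishing mid-journey simply produces a different next step, or null, on the next call:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NextStep {
    // exits: adjacency list of the world as it looks right now.
    // Returns the first move on a shortest path, or null if unreachable/already there.
    static String nextMove(Map<String, List<String>> exits, String from, String goal) {
        if (from.equals(goal)) return null;
        Map<String, String> parent = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(from);
        parent.put(from, null);
        while (!queue.isEmpty()) {
            String room = queue.poll();
            if (room.equals(goal)) {
                // Walk the parent chain back to the room adjacent to the start.
                String step = room;
                while (!from.equals(parent.get(step))) step = parent.get(step);
                return step;
            }
            for (String next : exits.getOrDefault(room, List.of())) {
                if (!parent.containsKey(next)) { parent.put(next, room); queue.add(next); }
            }
        }
        return null; // no route right now; caller can retry after the world changes
    }

    public static void main(String[] args) {
        Map<String, List<String>> exits = Map.of(
            "temple", List.of("square"),
            "square", List.of("temple", "gate"),
            "gate", List.of("square"));
        System.out.println(nextMove(exits, "temple", "gate"));
    }
}
```

Each call is a full BFS over the snapshot it sees, which is why the caching Rarva.Riendf mentions matters if you re-run this every step for many movers.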
31 Aug, 2014, Rarva.Riendf wrote in the 53rd comment:
Votes: 0
plamzi said:
We're getting offtopic here, but I believe what FMUD is trying to say is that you don't need to return the whole path at once. Your pathfinding command could just return the next move towards the goal,


Sure, you can try to mitigate the problem, but it will add a lot of complexity, caching, etc.
Because you will have to tell me how you can calculate a path toward a destination one room at a time, AND make it fast as well, since you need to recalculate the path for each room you go through.
01 Sep, 2014, Pymeus wrote in the 54th comment:
Votes: 0
I'm not familiar with the implementation details, but traditionally that's how dikurivatives have done tracking skills and mob memory/hunting since the mid-90s. I'm skeptical of it being an overly expensive calculation on modern hardware, as long as you don't need everything to be pathing all the time.
01 Sep, 2014, alteraeon wrote in the 55th comment:
Votes: 0
I agree that in Java, using threads to manage your network I/O probably makes sense (as long as you don't process commands from those threads.) It may even make sense to have a thread handling DB transactions. However, for pretty much everything else, it doesn't, and the reason is simple: timing.

In short, the state of the game server has to advance relative to some global timebase, and the flat out simplest possible way to do that is to have a single thread do a global update on every internal timer tick. It's pretty simple:

while(1) {
parse_socket_commands()
run_mob_AI()
try_sync_database()
etc…
wait_for_next_timer_tick()
}


That's it. All those things you did with threads before? Just run them as functions from a single main loop, instead of running them as functions from within a thread. No concurrency issues, no sync issues, no locks, and no problems.
01 Sep, 2014, Pymeus wrote in the 56th comment:
Votes: 0
Threaded models and coroutine models are both significantly more scalable and efficient than update loops. I'm not a fan of gratuitous concurrency, but the simplest and most natural way to delay something, without holding up everything else, is usually the concurrent way:
synchronized( someLock ) {
ch.startCasting( sp );
}
Thread.sleep(1000); // runs in this handler's own thread; throws InterruptedException
synchronized( someLock ) {
ch.castSpell( sp );
}


Let me disclaim: I would not choose to write a mud that way (it's not my taste), but the technique is reasonably effective and extremely easy to read.
02 Sep, 2014, alteraeon wrote in the 57th comment:
Votes: 0
While I can understand the scalability argument, I find that 'ease of implementation' is far more important for now. I'm running 100 concurrent players, 20k mobs, 95k objects, and 50k rooms across 600 active zones at less than 3% of one processor core with a single threaded update loop. Only when I can push that model no further would I even consider switching to a more complex and significantly harder to work with threaded model.

Now that I'm thinking about it, I would probably only thread from the edges inward and leave the bulk of the main update loop intact: the weather and mana flow systems are very nearly readonly and could be handled by a thread each. The socket stack is complex and contains much code, and could also probably be farmed off to another thread. There's a lot of stats collection and processing which might be worth isolating.

In the event of an overnight 10x growth in player load, my biggest concerns would be managing congestion, managing channel/social issues, and looking for possible performance bottlenecks due to nonlinear algorithm scaling. While there would definitely be growing pains, I feel like the current codebase and area set could handle a thousand concurrent players. I would estimate a single core processor load of between 10 and 20% at 1000 concurrent users, as the bulk of the processor load now is autonomous activity and not player activity.

In the event of an overnight 100x growth in player load to 10k players, my biggest immediate concern would be available instances. Our current instance space would probably start to run dry at around 1200 concurrent players, and I could push it up to around 7k players with minor changes, but there's some old crufty restrictions which would need to be removed before going higher than that. After that, I would expect my single threaded loop to run pegged pretty much all the time, with no wait state and probably constantly falling behind. That said, I wouldn't be at all surprised if it did actually run and remain playable (if a bit slowly) without threading once the initial scaling issues were dealt with.

However, long before we hit concurrency performance issues, there would be some pretty unmanageable problems with congestion. The new player zones would likely be among the worst: an average of 0.5 players in the newbie encampment becomes a reasonable 5 players at 10x growth, but an unmanageable 50 players at 100x. Cramming 50 noobs into an 8-room starting area is a recipe for disaster, and there are many places where area congestion would be the overwhelming limiting factor at 100x current scaling.
02 Sep, 2014, Rarva.Riendf wrote in the 58th comment:
Votes: 0
Don't even bother. I have a quest mode where I make all the mobiles act like zombies: rabidly hunting, moving, grunting and looking for fights (even when no player is around, they hunt the uninfected mobiles as well), grouping, etc., hence mimicking players.

Around 9k of them at the same time… and it is not really pushing the envelope.
Try it, it is fun… it is only a few lines of code to do it. A zombie basically amounts to: scanning, going to a target, grunting, and starting a fight to eat (or infect, if the fight lasts long enough).
I even make them level, and turn into liches when they have some kills (and the liches then control the other zombies and hunt in packs).
03 Sep, 2014, plamzi wrote in the 59th comment:
Votes: 0
alteraeon said:
While I can understand the scalability argument, I find that 'ease of implementation' is far more important for now. I'm running 100 concurrent players, 20k mobs, 95k objects, and 50k rooms across 600 active zones at less than 3% of one processor core with a single threaded update loop.


Right, so one of the reasons the async model gained traction so fast was that the "thread-per" model hit many real-life limitations, purely in terms of RAM. Specifically, serving thousands of concurrent site users on a "thread-per-client" server meant throwing ever-increasing amounts of hardware at the problem.

Now, thousands of concurrent MUD players may be a pipe dream, and you're probably well in the clear as long as your model requires anything less than 2-3k threads. All you have to pay is the high complexity toll. But if you are running a thread per room, then it seems to me you've just happily re-introduced the hardware limitation issue. 10K rooms is fairly average for a long-running MUD, with quite a few exceeding that size by orders of magnitude. And a grid-based world can reach millions very easily.

If you are convinced you want something along the lines of "thread-per-location", I would suggest looking into a "thread-per-area" option. This can make a lot of sense if you have area instancing, or if areas are otherwise fairly isolated.