I've been (sloooowly) working on a Python MUD called BogBoa. Earlier this year, I took the network code from it and re-packaged it as a download under the Apache 2.0 license, primarily thanks to Idealiad's thread and feedback about a P.... During the extraction process I realized there was extra spaghetti in my code and, having temporarily b0rked my main code with some structural changes, I decided to start a proper project page for Miniboa and maintain it as the base for my MUD's networking.
Miniboa is a framework for communicating with remote clients via the Telnet protocol, nothing else. You add the elves, vampires*, and galactic overlords. The server is single-threaded, asynchronous, and designed to be called from within your game loop at whatever rate you like.
You can fire up a do-nothing server in three lines of Python:
from miniboa.async import TelnetServer
server = TelnetServer()
while True: server.poll()
This will launch a server on port 7777 and greet visitors with a canned welcome.
*glittery vampires are a violation of this thread's Terms of Service. Just no.
Changed TelnetClient.send() to automatically convert the Python newline '\n' to '\r\n', matching the Telnet specification.
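The conversion itself is a one-liner; here's a minimal sketch of the idea (illustrative, not Miniboa's actual code) — normalizing any existing '\r\n' first avoids doubling the carriage return:

```python
def to_telnet_newlines(text):
    """Convert bare '\n' into the '\r\n' pairs Telnet expects."""
    # Normalize any existing '\r\n' first so they aren't doubled.
    return text.replace('\r\n', '\n').replace('\n', '\r\n')
```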
Added TelnetClient.send_cc() – sends text with caret codes converted to ANSI escape sequences. Added TelnetClient.send_wrapped() – sends text wrapped to the client's text window; requires a prior call to TelnetClient.request_naws().
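The wrapping side of send_wrapped() amounts to roughly this (a sketch using the standard textwrap module; in practice the column count would come from the client's NAWS reply rather than a hard-coded default):

```python
import textwrap

def wrap_for_client(text, columns=80):
    """Wrap text to the client's window width, joining with Telnet newlines."""
    return '\r\n'.join(textwrap.wrap(text, width=columns))
```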
I included the xterm.py module that I use to color text with "caret-codes" – little shorthands that begin with a '^'. So '^b' changes text to dark blue and '^B' changes it to bright blue. This is something I used back in my BBS days (yes, I'm that old). For a list of caret codes see http://code.google.com/p/miniboa/wiki/Ca... .
These are optional. Sending text via TelnetClient.send() bypasses it.
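To give the flavor of it, here's a toy version of the translation. The codes and escape sequences below are illustrative only; xterm.py's table is larger and the exact sequences there may differ (see the wiki page for the real list):

```python
import re

# Illustrative subset of caret codes mapped to ANSI escape sequences.
CARET_MAP = {
    '^b': '\x1b[0;34m',   # dark blue
    '^B': '\x1b[1;34m',   # bright blue
    '^w': '\x1b[0;37m',   # dark white (grey)
    '^d': '\x1b[0m',      # reset to default
}

_CARET_RE = re.compile('|'.join(re.escape(code) for code in CARET_MAP))

def caret_to_ansi(text):
    """Substitute each known caret code with its ANSI escape sequence."""
    return _CARET_RE.sub(lambda m: CARET_MAP[m.group(0)], text)
```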
Function "strip_caret_codes(text)" in xterm.py has ANSI_CODES instead of _ANSI_CODES, which throws an error whenever a client with ANSI turned off gets sent data.
EDIT: Ah! Someone on Google beat me to it! :redface:
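For anyone patching locally before the fix lands, the rename is the whole fix. The stripping itself amounts to something like this sketch (not the actual xterm.py code; the pattern assumes a caret followed by a single code character):

```python
import re

# Assumed pattern: a caret followed by one code character.
_CARET_CODE = re.compile(r'\^.')

def strip_caret_codes(text):
    """Remove caret codes entirely for clients with ANSI turned off."""
    return _CARET_CODE.sub('', text)
```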
@Barm: I noticed that the chat server example seems to poll the server fairly frequently (100 or maybe 1000 times a second). When I altered it to poll only 10 times a second there was a very small but nonetheless noticeable lag between when I entered a message and when I got the "You say…" message back. Are there any disadvantages to polling the server very frequently, as you seem to do in the examples/BogBoa?
22 Dec, 2009, David Haley wrote in the 11th comment:
Votes: 0
Why should you be polling X times a second anyhow? Shouldn't you just poll with a timeout, and then sit around waiting for activity to occur? That way, you can respond to events instantaneously, while not wasting time re-polling if nothing has changed.
@Barm: I noticed that the chat server example seems to poll the server fairly frequently (100 or maybe 1000 times a second). When I altered it to poll only 10 times a second there was a very small but nonetheless noticeable lag between when I entered a message and when I got the "You say…" message back. Are there any disadvantages to polling the server very frequently, as you seem to do in the examples/BogBoa?
The examples poll constantly because they are meant to be simple. Bogboa does it because I am simple :). It's just a tweak that I'm leaving for later, once I get a feel for how long a normal game cycle takes. In Bogboa, the scheduler makes a quick call to time.sleep() in an attempt to be more CPU friendly.
The only disadvantage is extra CPU time, which might be an issue with a hosting company. Uncapped polling plus a significant number of players could become CPU intensive. That doesn't mean you can't cap it at something that still feels pretty snappy to humans.
Why should you be polling X times a second anyhow? Shouldn't you just poll with a timeout, and then sit around waiting for activity to occur? That way, you can respond to events instantaneously, while not wasting time re-polling if nothing has changed.
That's done inside of TelnetServer.poll() – it calls select() with a 0 timeout. After testing for new connections, data waiting to be sent, and data waiting to be received, it returns control back to the author's game loop.
Well, yes… but why do you need to force instant polling like this? This creates situations like what Confuto described, where you either poll a lot (at the cost of increased CPU usage) or you experience some delay. It's easier to design instantaneous polling, but perhaps you could have an optional parameter to TelnetServer.poll that takes a timeout to pass to select, so that the main game loop can choose to sit around doing nothing if there's no I/O or events pending.
Remember, we're dealing with a single-threaded, asynchronous server. We can know when the server is ready to send data, but not when data comes in or new clients connect until we call Select(). If we start passing a timeout to Select(), it will block for the duration of that timeout. Blocking is the enemy. Yes, we can block for short periods, but there's really no advantage when those cycles might be useful to the game loop, say for heavy AI. If the server is wasting polls, then we can easily "nice" them back to the system. As long as we poll frequently enough, the players will never notice.
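Capping the rate is a few lines either way; a sketch of what such a loop might look like (POLLS_PER_SECOND is an assumed figure, not anything Miniboa sets for you):

```python
import time

POLLS_PER_SECOND = 20            # assumed cap; still feels snappy to humans
INTERVAL = 1.0 / POLLS_PER_SECOND

def game_loop(server, ticks):
    """Poll at a fixed rate, sleeping off whatever time is left over."""
    for _ in range(ticks):
        started = time.time()
        server.poll()
        # ... run game logic here ...
        leftover = INTERVAL - (time.time() - started)
        if leftover > 0:
            time.sleep(leftover)  # give the unused cycles back to the OS
```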
22 Dec, 2009, David Haley wrote in the 19th comment:
Votes: 0
Barm said:
We can know when the server is ready to send data, but not when data comes in or new clients connect until we call Select(). If we start passing a timeout to Select(), it will block for the duration of that timeout. Blocking is the enemy. Yes, we can block for short periods but there's really no advantage when those cycles might be useful to game loop, say heavy AI.
Blocking is only a problem if there is something to do. If there are pending AI computations, that means there is something to do. I guess I'm not seeing why the poll should be hammered just for the sake of it. This can be rather expensive if you're doing it enough times for enough connections.
You don't want to "nice" yourself back to the system because you're wasting time on polls: that reduces your overall priority, including when you're actually doing useful work.
If we start passing a timeout to Select(), it will block for the duration of that timeout.
Does it? If so, this is Python specific as the operating system version of select() does not do that. select() with a timeout returns either when the timeout expires OR when there is data ready.
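That's easy to check from Python directly. This snippet (using a local socketpair, so Unix-flavored) shows select() with a 5-second timeout returning well before the timeout when data is already waiting:

```python
import select
import socket
import time

# One end writes before we select on the other, so data is already pending.
a, b = socket.socketpair()
a.sendall(b'ping')

started = time.time()
readable, _, _ = select.select([b], [], [], 5.0)  # 5-second timeout
elapsed = time.time() - started

print(b in readable)   # True - the socket is reported readable at once
print(elapsed < 1.0)   # True - nowhere near the full timeout
a.close()
b.close()
```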
And, holy cats, I even wrote documentation.
http://code.google.com/p/miniboa/