31 Jan, 2009, quixadhal wrote in the 41st comment:
Votes: 0
Scandum said:
long long works fine on 64-bit platforms; make sure to use LL in your flag definition, for example #define BV33 (1LL << 32)


So, what's the sizeof() of a long long on a 64-bit platform then? On my 32-bit platform, int and long are both 32-bit, long long is 64-bit. On your 64-bit platform, does that make long and long long the same? Is long long 128-bit? What about 5 years from now when we HAVE 128-bit platforms?

I just think it's a bad design to rely on something that has changed in the past, and is likely to change again in the future. The bits are used as boolean flags, not as numerical values, so why try to wrangle around the numerical values?

The extended bitvector code helps, since you can specify bit 74 if you want; it just uses ugly macros to translate that into (foo[2] & 1<<9). You have to fix those macros when the size of an integer changes, or convert them to use unsigned chars instead. I.e., foo would be a char[] instead of an int[], and it'd be (foo[9] & 1<<1). Using enums, you could make it a little prettier.
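
Roughly, the macros boil down to something like this (a sketch; XWORD, XBIT, and IS_SET_X are made-up names, not the actual extended-bitvector macros, and it assumes 32-bit unsigned ints with the 1-based bit numbering used above):

#define XWORD(b)        (((b) - 1) / 32)          /* which array element holds bit b */
#define XBIT(b)         (1U << (((b) - 1) % 32))  /* mask for bit b within that element */
#define IS_SET_X(a, b)  ((a)[XWORD(b)] & XBIT(b))
#define SET_X(a, b)     ((a)[XWORD(b)] |= XBIT(b))
#define REMOVE_X(a, b)  ((a)[XWORD(b)] &= ~XBIT(b))

unsigned int foo[4];  /* room for 128 flag bits */
/* IS_SET_X(foo, 74) expands to (foo[2] & (1U << 9)), as above */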

Using C bit fields, it'd be more like:
struct room_foo {
    int vnum;
    ….
    struct room_bits {
        unsigned dark : 1;
        unsigned death : 1;
    } bits;
} room;

struct room_foo *the_room;

if (the_room->bits.dark) eaten_by(ch, grue);
31 Jan, 2009, quixadhal wrote in the 42nd comment:
Votes: 0
Lobotomy said:
Sharmair said:
(one thing though, the int should be changed to unsigned int in the struct, and for that matter in all 32-bit flag sets).

Correct me if I'm somehow wrong here, but I've always been under the impression that the signedness of the variable has no significance considering the use of bitwise operations, since the sign bit is used as a flag as well. I.e., you're going to have 32 bits, thus 32 flags, available to use no matter if it's signed or not.


You do have all 32 bits; however, the value of the entire variable may be different. If I declare a char c and an unsigned char u, and set them both to 255 (all bits set), then when I look at the high bit (1<<7), I get back 128 from both of them… however, printing one gives 255 and printing the other gives -1.

It may matter if you load/save the flags as whole integers rather than breaking the bits out and saving them separately.
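
A quick demonstration (a sketch; it assumes plain char is signed, which is implementation-defined):

#include <stdio.h>

int main(void)
{
    char c = 255;           /* all bits set; reads back as -1 where char is signed */
    unsigned char u = 255;  /* all bits set; reads back as 255 */

    printf("%d %d\n", c & (1 << 7), u & (1 << 7));  /* 128 128 */
    printf("%d %d\n", c, u);                        /* -1 255  */
    return 0;
}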
31 Jan, 2009, Sharmair wrote in the 43rd comment:
Votes: 0
As for the color in the creation issue: once the player has picked a name, the descriptor has a
CHAR_DATA, so why not just have the load_char_object() function (where the CHAR_DATA is
created) set the ANSI flag (in the part where it sets up all the starting values) and just use
send_to_char_color() or ch_printf_color() (and the like) to send the output? Though most of the
old coders have sent to the descriptor in nanny(), there is an active ch in the function you can use
(and I have in my SMAUG derivative), at least after the name is set.
31 Jan, 2009, Sharmair wrote in the 44th comment:
Votes: 0
Lobotomy said:
Sharmair said:
(one thing though, the int should be changed to unsigned int in the struct, and for that matter in all 32-bit flag sets).

Correct me if I'm somehow wrong here, but I've always been under the impression that the signedness of the variable has no significance considering the use of bitwise operations, since the sign bit is used as a flag as well. I.e., you're going to have 32 bits, thus 32 flags, available to use no matter if it's signed or not.

There are a number of reasons you might want to use unsigned here. For one thing, it is just more
conceptually correct. There was a stock SMAUG bug related to just this; as I recall there was at least a
compiler warning, and it may have been that the flag did not work (the bit in the sign position). Some of the
primitive operations also work differently on negative signed values than on unsigned ones, remainder and right shift being examples.
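
For example (a sketch; right-shifting a negative signed value is implementation-defined in C, so the signed results shown are just what typical compilers produce):

#include <stdio.h>

int main(void)
{
    int s = -8;
    unsigned int u = (unsigned int)-8;  /* 4294967288 with a 32-bit int */

    /* right shift: arithmetic for signed (typically -4), logical for unsigned */
    printf("%d %u\n", s >> 1, u >> 1);  /* -4 2147483644 */

    /* remainder keeps the sign of the dividend for signed types */
    printf("%d %u\n", -7 % 4, 4294967289u % 4u);  /* -3 1 */
    return 0;
}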
31 Jan, 2009, Scandum wrote in the 45th comment:
Votes: 0
quixadhal said:
Scandum said:
long long works fine on 64-bit platforms; make sure to use LL in your flag definition, for example #define BV33 (1LL << 32)


So, what's the sizeof() of a long long on a 64-bit platform then? On my 32-bit platform, int and long are both 32-bit, long long is 64-bit. On your 64-bit platform, does that make long and long long the same? Is long long 128-bit? What about 5 years from now when we HAVE 128-bit platforms?

What makes you think we'll have 128-bit platforms in 5 years? It's gonna be 50 years at the very least, unless a functional AI is created and there are some revolutionary breakthroughs in the hardware industry.

Anyway, a long long is 64 bits. It might be increased to 128 bits at some point, but that's really no concern in this case.
31 Jan, 2009, Mister wrote in the 46th comment:
Votes: 0
I use uint32_t, uint64_t, and similar types to guarantee the number of bits I need. Those are defined in the standard header stdint.h.
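
For example (a sketch with made-up flag names, not from any particular codebase):

#include <stdint.h>

/* exactly 64 bits wherever uint64_t exists, no guessing about long long */
#define ROOM_DARK   (UINT64_C(1) << 0)
#define ROOM_DEATH  (UINT64_C(1) << 1)
#define ROOM_BV33   (UINT64_C(1) << 32)  /* the high bits are safe: the width is fixed */

uint64_t room_flags = ROOM_DARK | ROOM_BV33;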
01 Feb, 2009, David Haley wrote in the 47th comment:
Votes: 0
Lobotomy said:
Correct me if I'm somehow wrong here, but I've always been under the impression that the signed-ness of the variable has no significance considering the use of bitwise operations since the sign bit is used as a flag as well. I.e, you're going to have 32 bits, thus 32 flags, available to use no matter if its signed or not.

The compiler does math differently depending on whether or not the type is signed, especially when you start dealing with overflow. See for example this or this.
01 Feb, 2009, David Haley wrote in the 48th comment:
Votes: 0
Scandum said:
What makes you think we'll have 128-bit platforms in 5 years?

I know, man. Those people saying we'll have 64 bits soon are just talking crazy.

The point is that this kind of reasoning is bad because it introduces assumptions where there shouldn't be any. If you really, really mean 'x' bits, you should say 'x' bits, and not assume that just because it works now on some implementation it will always work.
01 Feb, 2009, Scandum wrote in the 49th comment:
Votes: 0
DavidHaley said:
Scandum said:
What makes you think we'll have 128-bit platforms in 5 years?

I know, man. Those people saying we'll have 64 bits soon are just talking crazy.

You mean 128 bits? When 46116860 terabytes of RAM becomes common, I'm going to give it 4 more years. ;)

DavidHaley said:
The point is that this kind of reasoning is bad because it introduces assumptions where there shouldn't be any. If you really, really mean 'x' bits, you should say 'x' bits, and not assume that just because it works now on some implementation it will always work.

I disagree. In this case it's as safe as it would be to assume that a program working on a computer with 1GB of memory will also work on a computer with 4GB of memory. A bit check should still work if a long long becomes 128 bits.
01 Feb, 2009, David Haley wrote in the 50th comment:
Votes: 0
Scandum said:
You mean 128 bits? When 46116860 terabytes of RAM becomes common, I'm going to give it 4 more years. ;)

No, I meant 64 bits. I meant that making strong assumptions as to what tomorrow's needs will be based on today's is an exercise pretty much doomed to failure, as we (should by now) have learned many times over the history of computing (and, well, life in general).

Scandum said:
I disagree. In this case it's as safe as it would be to assume that a program working on a computer with 1GB of memory will also work on a computer with 4GB of memory. A bit check should still work if a long long becomes 128 bits.

How can you be so sure that code making strong assumptions about the number of bits will happily continue to work when that assumption is pulled out from under it like a rug?

I don't understand why the argument is being made against simply specifying the number of bits, in favor of crossing one's fingers and hoping that things won't change. It's not as if it takes additional effort to do things properly.
01 Feb, 2009, quixadhal wrote in the 51st comment:
Votes: 0
It's the same argument people use for continuing to use "\n\r" instead of the correct "\r\n" line endings. My grandpappy coded bits using longs, and it was good enough for him, so I don't see why I should change to these city-boy fancy-pants "unsigned" variables!
01 Feb, 2009, Lobotomy wrote in the 52nd comment:
Votes: 0
quixadhal said:
It's the same argument people use for continuing to use "\n\r" instead of the correct "\r\n" line endings. My grandpappy coded bits using longs, and it was good enough for him, so I don't see why I should change to these city-boy fancy-pants "unsigned" variables!

I'm curious: who do you see here actually arguing against the use of unsigned variables for bit vectors? I've only asked a question regarding the significance of signed versus unsigned; Scandum only appears to be questioning the practicality of thinking about 128-bit when 64-bit is already far beyond gargantuan by today's standards; and aside from that, I don't see anyone else saying anything that comes anywhere near the "\n\r" discussion.
01 Feb, 2009, Runter wrote in the 53rd comment:
Votes: 0
Everyone is always picking on poor, old Quixadhal.
Lobotomy said:
quixadhal said:
It's the same argument people use for continuing to use "\n\r" instead of the correct "\r\n" line endings. My grandpappy coded bits using longs, and it was good enough for him, so I don't see why I should change to these city-boy fancy-pants "unsigned" variables!

I'm curious: who do you see here actually arguing against the use of unsigned variables for bit vectors? I've only asked a question regarding the significance of signed versus unsigned; Scandum only appears to be questioning the practicality of thinking about 128-bit when 64-bit is already far beyond gargantuan by today's standards; and aside from that, I don't see anyone else saying anything that comes anywhere near the "\n\r" discussion.
01 Feb, 2009, Scandum wrote in the 54th comment:
Votes: 0
quixadhal said:
It's the same argument people use for continuing to use "\n\r" instead of the correct "\r\n" line endings.

Except that both are correct. That's because "\n" is valid, and from an implementation viewpoint there is no difference between "\n\r", "\nblabla", and "\n\t". It's all valid data. If you ask me, a mud server should send \n, skip the entire hassle with the \r on both sides of the connection, and call it SCP (Scandum's Compression Protocol).
01 Feb, 2009, David Haley wrote in the 55th comment:
Votes: 0
Oh geez, here we go again… Suffice it to say that there's a standard and it should be followed, which makes life easier for everybody. Would it be even easier if everybody just sent \n and that was the standard instead? Well, yes, but until that becomes the standard… well, I'm not going to get into this whole thing again.
01 Feb, 2009, David Haley wrote in the 56th comment:
Votes: 0
By the way, although it makes no difference for display, it makes for a considerable annoyance when trying to detect ends of lines…
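
For instance, a receiver has to cope with "\r\n", "\n\r", and bare "\n" all arriving from different clients. One common approach (a rough sketch, not from any particular codebase) is to drop every '\r' and split on '\n' alone:

#include <stddef.h>

/* Remove every '\r' in place so "\r\n", "\n\r", and bare "\n" all parse
 * the same way; lines can then be split on '\n' alone. Returns new length. */
size_t strip_cr(char *buf)
{
    size_t in, out = 0;
    for (in = 0; buf[in] != '\0'; in++)
        if (buf[in] != '\r')
            buf[out++] = buf[in];
    buf[out] = '\0';
    return out;
}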
01 Feb, 2009, Scandum wrote in the 57th comment:
Votes: 0
The standard is either "\r\n", "\r\0", or "\n", so there's no reason not to use SCP.
01 Feb, 2009, David Haley wrote in the 58th comment:
Votes: 0
Which "standard" are you referring to? Clearly not the telnet standard…