<!-- MHonArc v2.4.4 -->
<!--X-Subject: Re: [MUD&#45;Dev] Storing tokens with flex &#38; bison -->
<!--X-From-R13: "Xba O. Znzoreg" <wyflfvapNvk.argpbz.pbz> -->
<!--X-Date: Wed, 19 Jan 2000 06:58:35 &#45;0800 -->
<!--X-Message-Id: 00ea01bf6260$2df9eec0$020101df@JonLambert -->
<!--X-Content-Type: text/plain -->
<!--X-Head-End-->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<html>
<head>
<title>MUD-Dev message, Re: [MUD-Dev] Storing tokens with flex &amp; bison</title>
<!-- meta name="robots" content="noindex,nofollow" -->
<link rev="made" href="mailto:jlsysinc#ix,netcom.com">
</head>
<body background="/backgrounds/paperback.gif" bgcolor="#ffffff"
      text="#000000" link="#0000FF" alink="#FF0000" vlink="#006000">

  <font size="+4" color="#804040">
    <strong><em>MUD-Dev<br>mailing list archive</em></strong>
  </font>
      
<br>
[&nbsp;<a href="../">Other Periods</a>
&nbsp;|&nbsp;<a href="../../">Other mailing lists</a>
&nbsp;|&nbsp;<a href="/search.php3">Search</a>
&nbsp;]
<br clear=all><hr>
<!--X-Body-Begin-->
<!--X-User-Header-->
<!--X-User-Header-End-->
<!--X-TopPNI-->

Date:&nbsp;
[&nbsp;<a href="msg00155.html">Previous</a>
&nbsp;|&nbsp;<a href="msg00157.html">Next</a>
&nbsp;]
&nbsp;&nbsp;&nbsp;&nbsp;
Thread:&nbsp;
[&nbsp;<a href="msg00148.html">Previous</a>
&nbsp;|&nbsp;<a href="msg00174.html">Next</a>
&nbsp;]
&nbsp;&nbsp;&nbsp;&nbsp;
Index:&nbsp;
[&nbsp;<A HREF="author.html#00156">Author</A>
&nbsp;|&nbsp;<A HREF="#00156">Date</A>
&nbsp;|&nbsp;<A HREF="thread.html#00156">Thread</A>
&nbsp;]

<!--X-TopPNI-End-->
<!--X-MsgBody-->
<!--X-Subject-Header-Begin-->
<H1>Re: [MUD-Dev] Storing tokens with flex &amp; bison</H1>
<HR>
<!--X-Subject-Header-End-->
<!--X-Head-of-Message-->
<UL>
<LI><em>To</em>: &lt;<A HREF="mailto:mud-dev#kanga,nu">mud-dev#kanga,nu</A>&gt;</LI>
<LI><em>Subject</em>: Re: [MUD-Dev] Storing tokens with flex &amp; bison</LI>
<LI><em>From</em>: "Jon A. Lambert" &lt;<A HREF="mailto:jlsysinc#ix,netcom.com">jlsysinc#ix,netcom.com</A>&gt;</LI>
<LI><em>Date</em>: Wed, 19 Jan 2000 04:32:32 -0500</LI>
<LI><em>Reply-To</em>: <A HREF="mailto:mud-dev#kanga,nu">mud-dev#kanga,nu</A></LI>
<LI><em>Sender</em>: <A HREF="mailto:mud-dev-admin#kanga,nu">mud-dev-admin#kanga,nu</A></LI>
</UL>
<!--X-Head-of-Message-End-->
<!--X-Head-Body-Sep-Begin-->
<HR>
<!--X-Head-Body-Sep-End-->
<!--X-Body-of-Message-->
<PRE>
Chris Gray wrote:
&gt;[Jon A. Lambert:]
&gt;
&gt;{My, we're certainly getting into some low level technical stuff here.
&gt;Those not interested, please skip all this! Marian, there will be a
&gt;test on the details tomorrow! :-)}


Warning: This post contains even more graphic and explicit implementation 
details that some might find offensive (or hilarious).  ;-)   

&gt;&gt; BTW, this
&gt;&gt; technique might have been useful in the DevMUD project for dynamically
&gt;&gt; loading and registering new function libraries (DLLs).
&gt;
&gt;Nod. I've done that when I needed the database to be stable, and didn't
&gt;want the key values for the builtins changing. A new major release lets
&gt;me clean 'em up again, however.


I wonder if there might be commercial possibilities for mud library writers in 
distributing obfuscated portable byte code.  Gotta have a stable VM
like Java.  Sorry... I'm always thinking about exploitation.  ;-)

&gt;What's your stack size? Mine is currently 8K. That allows 2048 local
&gt;variables, parameters and return addresses - I figure it'll do. Doesn't
&gt;take too long to hit that with deep recursion. Recursive Towers of Hanoi
&gt;has no trouble, however.
&gt;

It's 1K and will grow in 1K increments.  Each variable occupies 12 bytes
on the stack regardless of type.  More on this below.  I'm going to have to
add TOH to my test scripts.
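
Roughly this shape, purely for illustration -- the names and the plain-struct
slot are invented for this sketch, and the real stack holds the constructed
TMVars described further down, so it couldn't realloc quite this naively:

#include &lt;cstdlib&gt;

struct Slot { int type; unsigned char value[8]; };  // 12 bytes per variable

class VarStack {
    enum { GROW_BYTES = 1024,
           GROW_SLOTS = GROW_BYTES / sizeof(Slot) };   // ~1K worth of slots
    Slot     *mpBase;
    unsigned  mCap;    // capacity, in slots
    unsigned  mTop;    // next free slot
public:
    VarStack() : mCap(GROW_SLOTS), mTop(0) {
        mpBase = (Slot*)std::malloc(mCap * sizeof(Slot));
    }
    ~VarStack() { std::free(mpBase); }

    void push(const Slot&amp; s) {
        if (mTop == mCap) {                             // grow in 1K increments
            mCap += GROW_SLOTS;
            mpBase = (Slot*)std::realloc(mpBase, mCap * sizeof(Slot));
        }
        mpBase[mTop++] = s;
    }
    Slot pop() { return mpBase[--mTop]; }
    // (copying/assignment omitted for brevity)
};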

&gt;Me too on the locals. However, I was aiming for maximum speed here, so was
&gt;happy to relax my style to get it. Here's some snippets of my main byte-
&gt;code interpreter, to show the gcc dynamic goto stuff, along with the
&gt;alternative for non-gcc compilers.


[snip]
&gt;#define BREAK goto *(codeTable[*pc++])     &lt;== funny gcc stuff


Ok, I have seen this, or rather an equivalent of it.  I can only do this with
my compiler by using the inline assembler.
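
For anyone following along, here's roughly what the trick looks like in
isolation.  This is a made-up toy, not Chris's actual interpreter -- the
opcodes and table are invented for the sketch -- and it needs gcc's
"labels as values" extension:

#include &lt;cstdio&gt;

enum { BC_PUSH, BC_ADD, BC_PRINT, BC_HALT };

static void run(const unsigned char *pc)
{
    /* gcc extension: &amp;&amp;label yields the label's address,
       and "goto *ptr" jumps to it */
    void *codeTable[] = { &amp;&amp;op_push, &amp;&amp;op_add, &amp;&amp;op_print, &amp;&amp;op_halt };
    long stack[64];
    long *sp = stack + 64;                   /* stack grows downward */

#define BREAK goto *(codeTable[*pc++])       /* fetch opcode and dispatch */
    BREAK;

op_push:  *--sp = *pc++;                        BREAK;
op_add:   { long ul = *sp++; *sp += ul; }       BREAK;
op_print: std::printf("%ld\n", *sp++);          BREAK;
op_halt:  return;
#undef BREAK
}

int main()
{
    /* computes 2 + 3 and prints 5; the portable alternative is a
       switch(*pc++) inside a for(;;), with BREAK defined as break */
    unsigned char prog[] = { BC_PUSH, 2, BC_PUSH, 3, BC_ADD, BC_PRINT, BC_HALT };
    run(prog);
    return 0;
}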

&gt;I did the add1/sub1 experiment. It would have taken less than 10 minutes
&gt;if I hadn't left out a couple of 'break's in a code-generation switch. :-(
&gt;
&gt;Unfortunately, there was no noticeable change in run speed with or without
&gt;them. However, with this version of gcc/egcs and RedHat Linux, the tests
&gt;all run slower than on the previous version (same machine). Grrrrrr!


Hmm... on its face, you would think it would be an operation that occurs
frequently in loop blocks.  A few instructions saved on each iteration should 
in theory add up fast. 

&gt;Sure, I'll buy some of those. The nature of my parse-tree interpreter is
&gt;such that those weren't issues with it, however. I just have a file-static
&gt;variable that says when to get out of any loops, and let the recursive
&gt;interpretation just dribble its way back to the top. 'interpreter.c' in
&gt;my ToyMUD sources is sort-of like that - it uses the return value in the
&gt;recursive calls to say whether or not to quit. As for multiple input
&gt;languages - well, I might have to add one or two new parse-tree node
&gt;types, but that would be about it.


Nod.

&gt;&gt; Ok, I'll pull back the covers.  And your guess is pretty accurate.  ;)
&gt;
&gt;Do I win a cookie? :-)
&gt;

Sure, why not.  I'm going to get long-winded here and maybe you'll toss 
that cookie.  And anyone else who hasn't fallen asleep yet, feel free to attack
viciously.  ;-)

For all my talk about readability, my TMVar class is probably one of 
the most obtuse ones, and it does do violence to the OO model.  

If I were really OOey, I'd probably implement a class hierarchy like so:

Var
 |
  \--- Integer
   \--- Float
    \--- String
     \--- List
      \--- Array
       \--- Object
        \--- Error
         \--- Buffer
          \--- Queue
           \--- Method
            \--- KA             

The Var class would wrap and subclass all the possible data types.  
In early versions, I did this and wound up with lots of overhead -- all
that stuff you read about in "1000 reasons why I don't use C++".

So this came to mind:

class TMVar {
   Type type;
   union {
      int       i;
      float     f;
      TMError   e;
      TMString  s;
      TMBuffer  b;
      TMList    l;
      TMQueue   q;
      TMArray   a;
      TMObject  o;
      TMKA      k;
      TMMethod  m;
   } value;
};

Note: I use TM as a prefix in front of most classes to avoid namespace 
collisions.  Most of these are self-explanatory types, though I bet a few might
make you curious.  TMMethod is a wrapper object for an online version control
system, and TMKA is a special data structure short for "Knowledge Area" which is
intended to be attached to objects and used by a procedural reasoning system.
It's bytecode that has been compiled by a PRS compiler.  

The big union is reminiscent of Bison/Yacc tokens (and CoolMUD too).  
Such an abomination is of course illegal in C++.  One solution was to use 
pointers to the classes and invoke their respective allocation and construction 
routines.  However, I didn't like this idea since it played havoc with my reference 
counting schemes.  

Anyway, I came across this notion of split-phase construction and
split-phase destruction, as documented by James Coplien, I believe.

class TMVar {
  Type type;
  // Allocation pool - should hold just enough room
  // for class vtable ptr + class member variables
  // All classes have one member variable, a pointer.
  // So two ints should be more than enough
  unsigned char value[8];
};

All my data type classes are 4 bytes in size, 8 bytes if they
have a vtable.  Floats, ints and ObjectIds fit nicely into 8 bytes
too.
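
If that assumption ever breaks (someone adds a member and a class outgrows
the pool), things go wrong silently, so an old trick like this could guard it
at compile time.  Just an illustration, not lifted from my source:

// A negative array size is a compile-time error, so the build breaks
// if a data type class ever outgrows the 8-byte pool.
typedef char TMString_fits_pool[ sizeof(TMString) &lt;= 8 ? 1 : -1 ];
typedef char TMList_fits_pool  [ sizeof(TMList)   &lt;= 8 ? 1 : -1 ];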

So the new operator is overloaded (short circuited) to return a 
pointer to "value".  It does nothing, ergo no allocation is done
at all, yet the constructors are called.  :-) 

Here it is in my String class:
inline void* TMString::operator new(size_t, void* p)
  { return p; }

And here is the TMVar default constructor:

inline TMVar::TMVar(Type t = NUM) : type(t) {
   switch(t) {
      case NUM:
         *(int*)value = 0;
         break;
      case STRING:
         new(value) TMString;
         break;
      case LIST:
         new(value) TMList;
         break;
      ... etc...
   }
}

And split-phase destruction is not done with 'delete', it's handled by 
calling the destructors directly.

TMVar::~TMVar() {
   switch(type) {
      case STRING:
        ((TMString*)value)-&gt;~TMString();
        break;
      case LIST:
        ((TMList*)value)-&gt;~TMList();
        break;
      ... etc...
   }
}

And finally all my data type classes are just pointers to the real
information.  I implement copy-on-write reference counting semantics
in all my classes.  

class TMString {
  struct Strdata {
    int     len;    // length of string
    int     mem;    // memory allocated
    int     ref;    // reference count
    char    *str;   // string itself

    Strdata(int _mem = STRING_INITIAL_SIZE) : ref(1), len(0)
    {
      mem = MAX(_mem, STRING_INITIAL_SIZE);
      str = new char[mem];
      str[0] = '\0';
    }
    ~Strdata()
    {
      delete[] str;
      str = 0;
    }
  };
  Strdata *mpValue;  
};

The default constructor is cheap, it just nulls the pointer.  All
the other String functions recognize the null pointer as 
equivalent to a null String.

TMString::TMString() : mpValue(0) {}

And copy constructors are cheap too.  I just increment the
reference counter and share the data.

TMString::TMString(const TMString&amp; r_source_str)
{
  if (r_source_str.mpValue)  {   
    r_source_str.mpValue-&gt;ref++;
    mpValue = r_source_str.mpValue;
  } else {
    mpValue = 0;
  }
}
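
The write half of copy-on-write isn't shown above.  The idea, sketched here
from the description rather than pasted from my source (detach() is a made-up
name), is that any mutating member first breaks the sharing:

// Sketch only: called at the top of any mutating TMString member.
// If the Strdata is shared, make a private copy before writing.
// (Assumes the TMString/Strdata definitions above and &lt;cstring&gt; for memcpy.)
void TMString::detach()
{
  if (mpValue &amp;&amp; mpValue-&gt;ref &gt; 1) {
    Strdata *fresh = new Strdata(mpValue-&gt;mem);           // same sized buffer
    fresh-&gt;len = mpValue-&gt;len;
    memcpy(fresh-&gt;str, mpValue-&gt;str, mpValue-&gt;len + 1);   // include the '\0'
    mpValue-&gt;ref--;       // give up our share of the old data
    mpValue = fresh;      // new Strdata starts with ref == 1
  }
}

An append, for example, would call detach() first and then grow str in place.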

So there's my explanation for all the fugly casting that's going on below 
and the unusual use of the new() operator.


&gt;&gt; TMVar operator+(const TMVar&amp; v1,const TMVar&amp; v2) throw(Error)
&gt;&gt; {
&gt;&gt;   if ((v1.type != v2.type) &amp;&amp; (v1.type != LIST)) {
&gt;&gt;     throw (E_TYPE);
&gt;&gt;   }
&gt;&gt;   if ((v1.type == ARRAY) || (v1.type == ERROR) || (v1.type == OBJECT)) {
&gt;&gt;     throw (E_TYPE);
&gt;&gt;   }
&gt;&gt;
&gt;&gt;   TMVar ret(v1.type);
&gt;
&gt;As I've mentioned, my C++ is lacking, but I think I'm OK with this. It's
&gt;semantically the same as
&gt;
&gt;    TMVar ret = v1.type;
&gt;
&gt;but without any possibility of a copy constructor call?
&gt;

Aye.  It's the constructor being called with an explicit type, overriding
the default.
TMVar ret;    // by default creates an integer type variable loaded with 0

In the case of other types, the constructor creates classes with
null pointers (see TMString above).


&gt;&gt; And heres the native code.   I have register optimizations on, but I turned
&gt;&gt; off constructor inlines so it could be followed.  Pardon the line breaks.
&gt;
&gt;OK. Ouch, those exceptions are expensive!
&gt;

Oh yes.  But currently when we hit them we are done for anyway.  I've yet
to implement ignoring and/or catching exceptions in the programming 
language, though it is in my spec and at the top of the todo list.  And when 
I do, writing functions that expect exceptions will incur the expense &lt;waah!&gt; :-(

[much snippage]
&gt;&gt;       c:\tychomud\tmvar.cpp, 484: return ret;
&gt;&gt;       00403B44 E871F1FFFF               call  TMVar::TMVar(const TMVar &amp;)
&gt;&gt;       00403B5C E896F2FFFF               call  TMVar::~TMVar()
&gt;
&gt;I think this bit is what I was getting at with my worry about temporaries
&gt;with reference variables. But again, I'm far from an expert. This pair
&gt;of constructor/destructor calls on the 'return' seems to be undesireable.

Yep.  Your criticism is spot on.  Hrrm... That's one of those BIG ones, eh?
Lots of instructions that can be cut out there.   

&gt;If you rewrote the operator routines to take a 3rd reference parameter
&gt;that is where to store the result, would not this extra stuff go away?
&gt;This isn't a reference case here, but perhaps something similar.

That's a good idea.  I can't use the "operator+()" signature since it
won't take a 3rd parameter unless I've missed some C++ trivia.
It really isn't a necessity, just syntactic sugar.  I'd use something like 
"add(left,right,ret)" instead.   &lt;bops head&gt;  Thanks!

&gt;&gt; Passing by reference is the same as passing by pointer as far as the
&gt;&gt; machine code that is generated by the compiler.  Of course, as a general rule,
&gt;&gt; I only use const reference parameters for safety.
&gt;
&gt;Well, from a non C++ expert, my understanding is that it is fairly easy
&gt;to accidentally pass a non-lvalue to a routine that requires a reference
&gt;parameter. The compiler will silently proceed to create the needed lvalue
&gt;by creating a new temporary value of the required type, and doing an
&gt;assigment of the expression to the new temporary. It then passes the
&gt;address of the new temporary to the routine. This is a bug if in this
&gt;particular call, a value stored into that reference parameter matters.
&gt;It is a performance hit because of possible constructor/destructor calls
&gt;on the temporary variable, in addition to the value copy.
&gt;

I made a few mental mistakes early on with casting which are very similar to 
the above.  

Writing things like:
     ((TMList)ret.value).rSetAdd(v2);
Instead of:
     ((TMList*)ret.value)-&gt;rSetAdd(v2);

The former creates a brand new copy of the TMList, performs the operation on
it, and the result promptly vanishes into oblivion, leading to much head 
scratching and gnashing of teeth.  

Another danger, related to the above, is returning references to variables 
created on the stack.  Pretty much the same as the C error of returning
pointers to locals.  I would _never_ever do that.  &lt;grin&gt;
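
For completeness, the shape of the thing I would never ever do:

const TMString&amp; broken()
{
  TMString tmp;   // lives on this function's stack
  return tmp;     // destroyed right here; the caller gets a dangling reference
}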

&gt;Just for geek fun, here is the code gcc generated for my addition case.
&gt;This includes the final 'goto' thingy.
&gt;
&gt; CASE(bc_add) {
&gt;     ULONG_T ul;
&gt;
&gt;     ul = *sp++;
&gt;0xc72 &lt;bcRun+1298&gt;: movl   (%esi),%eax
&gt;0xc74 &lt;bcRun+1300&gt;: addl   $0x4,%esi
&gt;     *sp += ul;
&gt;0xc77 &lt;bcRun+1303&gt;: addl   %eax,(%esi)
&gt;     BREAK;
&gt;0xc79 &lt;bcRun+1305&gt;: movzbl (%edi),%eax
&gt;0xc7c &lt;bcRun+1308&gt;: incl   %edi
&gt;0xc7d &lt;bcRun+1309&gt;: jmp    *0x0(,%eax,4)
&gt; }
&gt;
&gt;(This is an unlinked .o file - there would normally be a non-zero offset
&gt;in that last 'jmp' instruction.)
&gt;

Oh that is slick.  Can't beat that.  All that's left for you to do is start 
writing code that will execute in parallel.  :-)

--
--*     Jon A. Lambert - TychoMUD Email: jlsysinc#nospam,ix.netcom.com     *--
--*     Mud Server Developer's Page &lt;<A  HREF="http://jlsysinc.home.netcom.com">http://jlsysinc.home.netcom.com</A>&gt;      *--
--* "No Free man shall ever be debarred the use of arms." Thomas Jefferson *--







_______________________________________________
MUD-Dev maillist  -  MUD-Dev#kanga,nu
<A  HREF="http://www.kanga.nu/lists/listinfo/mud-dev">http://www.kanga.nu/lists/listinfo/mud-dev</A>

</PRE>

<!--X-Body-of-Message-End-->
<!--X-MsgBody-End-->
<!--X-Follow-Ups-->
<HR>
<!--X-Follow-Ups-End-->
<!--X-References-->
<!--X-References-End-->
<!--X-BotPNI-->
<UL>
<LI>Prev by Date:
<STRONG><A HREF="msg00155.html">Re: [MUD-Dev] How to handle/display partial language skill</A></STRONG>
</LI>
<LI>Next by Date:
<STRONG><A HREF="msg00157.html">RE: [MUD-Dev] Community Relations</A></STRONG>
</LI>
<LI>Prev by thread:
<STRONG><A HREF="msg00148.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></STRONG>
</LI>
<LI>Next by thread:
<STRONG><A HREF="msg00174.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></STRONG>
</LI>
<LI>Index(es):
<UL>
<LI><A HREF="index.html#00156"><STRONG>Date</STRONG></A></LI>
<LI><A HREF="thread.html#00156"><STRONG>Thread</STRONG></A></LI>
</UL>
</LI>
</UL>

<!--X-BotPNI-End-->
<!--X-User-Footer-->
<!--X-User-Footer-End-->
<ul><li>Thread context:
<BLOCKQUOTE><UL>
<LI><STRONG>Re: [MUD-Dev] Storing tokens with flex &amp; bison</STRONG>, <EM>(continued)</EM>
<ul compact>
<ul compact>
<LI><strong><A NAME="00069" HREF="msg00069.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
Chris Jones <a href="mailto:cjones#v-wave,com">cjones#v-wave,com</a>, Sat 08 Jan 2000, 07:19 GMT
</LI>
</ul>
<LI><strong><A NAME="00116" HREF="msg00116.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
Jon A. Lambert <a href="mailto:jlsysinc#ix,netcom.com">jlsysinc#ix,netcom.com</a>, Tue 18 Jan 2000, 07:01 GMT
</LI>
<LI><strong><A NAME="00145" HREF="msg00145.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
cg <a href="mailto:cg#ami-cg,GraySage.Edmonton.AB.CA">cg#ami-cg,GraySage.Edmonton.AB.CA</a>, Wed 19 Jan 2000, 03:32 GMT
</LI>
<LI><strong><A NAME="00148" HREF="msg00148.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
cg <a href="mailto:cg#ami-cg,GraySage.Edmonton.AB.CA">cg#ami-cg,GraySage.Edmonton.AB.CA</a>, Wed 19 Jan 2000, 04:32 GMT
</LI>
<LI><strong><A NAME="00156" HREF="msg00156.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
Jon A. Lambert <a href="mailto:jlsysinc#ix,netcom.com">jlsysinc#ix,netcom.com</a>, Wed 19 Jan 2000, 14:58 GMT
</LI>
<LI><strong><A NAME="00174" HREF="msg00174.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
cg <a href="mailto:cg#ami-cg,GraySage.Edmonton.AB.CA">cg#ami-cg,GraySage.Edmonton.AB.CA</a>, Thu 20 Jan 2000, 03:04 GMT
</LI>
</ul>
</LI>
</UL></BLOCKQUOTE>

</ul>
<hr>
<center>
[&nbsp;<a href="../">Other Periods</a>
&nbsp;|&nbsp;<a href="../../">Other mailing lists</a>
&nbsp;|&nbsp;<a href="/search.php3">Search</a>
&nbsp;]
</center>
<hr>
</body>
</html>