<!-- MHonArc v2.4.4 -->
<!--X-Subject: Re: [MUD&#45;Dev] Storing tokens with flex &#38; bison -->
<!--X-From-R13: ptNnzv&#45;pt.UenlEntr.Sqzbagba.OP.QO -->
<!--X-Date: Tue, 18 Jan 2000 19:32:48 &#45;0800 -->
<!--X-Message-Id: 200001190320.UAA15217@ami&#45;cg.GraySage.Edmonton.AB.CA -->
<!--X-Content-Type: text/plain -->
<!--X-Head-End-->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<html>
<head>
<title>MUD-Dev message, Re: [MUD-Dev] Storing tokens with flex &amp; bison</title>
<!-- meta name="robots" content="noindex,nofollow" -->
<link rev="made" href="mailto:cg#ami-cg,GraySage.Edmonton.AB.CA">
</head>
<body background="/backgrounds/paperback.gif" bgcolor="#ffffff"
      text="#000000" link="#0000FF" alink="#FF0000" vlink="#006000">

  <font size="+4" color="#804040">
    <strong><em>MUD-Dev<br>mailing list archive</em></strong>
  </font>
      
<br>
[&nbsp;<a href="../">Other Periods</a>
&nbsp;|&nbsp;<a href="../../">Other mailing lists</a>
&nbsp;|&nbsp;<a href="/search.php3">Search</a>
&nbsp;]
<br clear=all><hr>
<!--X-Body-Begin-->
<!--X-User-Header-->
<!--X-User-Header-End-->
<!--X-TopPNI-->

Date:&nbsp;
[&nbsp;<a href="msg00146.html">Previous</a>
&nbsp;|&nbsp;<a href="msg00148.html">Next</a>
&nbsp;]
&nbsp;&nbsp;&nbsp;&nbsp;
Thread:&nbsp;
[&nbsp;<a href="msg00116.html">Previous</a>
&nbsp;|&nbsp;<a href="msg00148.html">Next</a>
&nbsp;]
&nbsp;&nbsp;&nbsp;&nbsp;
Index:&nbsp;
[&nbsp;<A HREF="author.html#00145">Author</A>
&nbsp;|&nbsp;<A HREF="#00145">Date</A>
&nbsp;|&nbsp;<A HREF="thread.html#00145">Thread</A>
&nbsp;]

<!--X-TopPNI-End-->
<!--X-MsgBody-->
<!--X-Subject-Header-Begin-->
<H1>Re: [MUD-Dev] Storing tokens with flex &amp; bison</H1>
<HR>
<!--X-Subject-Header-End-->
<!--X-Head-of-Message-->
<UL>
<LI><em>To</em>: <A HREF="mailto:mud-dev#kanga,nu">mud-dev#kanga,nu</A></LI>
<LI><em>Subject</em>: Re: [MUD-Dev] Storing tokens with flex &amp; bison</LI>
<LI><em>From</em>: <A HREF="mailto:cg#ami-cg,GraySage.Edmonton.AB.CA">cg#ami-cg,GraySage.Edmonton.AB.CA</A></LI>
<LI><em>Date</em>: Tue, 18 Jan 2000 20:20:12 -0700</LI>
<LI><em>Reply-To</em>: <A HREF="mailto:mud-dev#kanga,nu">mud-dev#kanga,nu</A></LI>
<LI><em>Sender</em>: <A HREF="mailto:mud-dev-admin#kanga,nu">mud-dev-admin#kanga,nu</A></LI>
</UL>
<!--X-Head-of-Message-End-->
<!--X-Head-Body-Sep-Begin-->
<HR>
<!--X-Head-Body-Sep-End-->
<!--X-Body-of-Message-->
<PRE>
[Jon A. Lambert:]

{My, we're certainly getting into some low level technical stuff here.
Those not interested, please skip all this! Marian, there will be a
test on the details tomorrow! :-)}

&gt; I'm familiar with the Z-80, though I'm not sure how closely related it was to the
&gt; 8080.  I know the Z-80 had over 400 instructions (if you count the modes), a
&gt; much much larger set than the 6502.

Z-80 is a superset of the 8080. CP/M stuff (like all of my Draco system)
targeted the 8080 subset only, for maximum compatibility.

&gt; In particular, I recall the 'RST n'  opcode, which is pretty much the way I
&gt; implement calls to built-in functions/library calls in my bytecode.  It allows
&gt; me to add new built-in functions without recompiling old bytecode, as long
&gt; as I don't change 'n' which is the index into a definition table.  BTW, this
&gt; technique might have been useful in the DevMUD project for dynamically
&gt; loading and registering new function libraries (DLLs).

Nod. I've done that when I needed the database to be stable, and didn't
want the key values for the builtins changing. A new major release lets
me clean 'em up again, however.
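
The mechanics are simple either way: the 'call builtin' opcode carries an
index, and the index goes into a table of function pointers that can only
ever grow at the end. Something like this (a sketch with invented names,
not my real tables):

typedef void (*Builtin_t)(void);    /* real builtins take VM stack args, of course */

static void biPrint(void)  { /* builtin 0 */ }
static void biRandom(void) { /* builtin 1 */ }

/* The index stored in the bytecode is the position in this table, so
   entries are append-only - reordering them would break old bytecode. */
static Builtin_t builtinTable[] = {
    biPrint,		/* index 0 - must never move */
    biRandom,		/* index 1 - must never move */
    /* new builtins go on the end */
};

The dispatch case for a builtin call then just fetches the index from the
bytecode and calls through the table.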

&gt; Good point.  I could probably remove my check for underflow!  I'm dubious
&gt; about removing the stack overflow check since I allow infinite recursion,
&gt; theoretically at least.  My VM will time out and cancel the task long before any
&gt; ridiculously huge stack size is reached.

What's your stack size? Mine is currently 8K. That allows 2048 local
variables, parameters and return addresses - I figure it'll do. Doesn't
take too long to hit that with deep recursion. Recursive Towers of Hanoi
has no trouble, however.
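
For the arithmetic-minded, that 2048 is just 8192 bytes at 4 bytes per
stack slot. If you do keep the overflow check, it boils down to one compare
per call frame - roughly this shape (a sketch with invented names, not
real code from my interpreter):

typedef unsigned long ULONG_T;		/* assumed 4 bytes, as in bcRun */

#define STACK_SLOTS 2048		/* 8192 bytes / 4 bytes per slot */

static ULONG_T stackArea[STACK_SLOTS];
static ULONG_T *const stackLimit = stackArea;	/* lowest legal value of sp */

/* non-zero if pushing 'slotsNeeded' more entries would run off the end;
   the stack grows downwards, as in bcRun above */
static int wouldOverflow(ULONG_T *sp, long slotsNeeded)
{
    return sp - stackLimit &lt; slotsNeeded;
}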

&gt; I'm not familiar with how the gcc dynamic goto stuff works.  I've included
&gt; my Borland output near the end of this post.  Usually I do include locals
&gt; at the very top of routines, for personal readability.  The only time I embed
&gt; locals in block scope is with the C++ for-loop counters and those that have
&gt; a high construction cost (classes).   I know that it is not necessarily optimal.

Me too on the locals. However, I was aiming for maximum speed here, so was
happy to relax my style to get it. Here are some snippets of my main byte-
code interpreter, to show the gcc dynamic goto stuff, along with the
alternative for non-gcc compilers.

static void
bcRun(QValue_t *pRes)
{
    BYTE_T *pc;
    ULONG_T *sp, *fp;
#ifdef GCC
    static void *(codeTable[]) = {
	&amp;&amp;label_bc_error,			    &lt;== funny gcc syntax
	&amp;&amp;label_bc_ret,
	&amp;&amp;label_bc_ret1,
	&amp;&amp;label_bc_spDec,
	&amp;&amp;label_bc_bi,
	[snip]
	&amp;&amp;label_bc_caseI,
	&amp;&amp;label_bc_caseS
    };
#define CASE(x)	label_ ## x:
#define BREAK	goto *(codeTable[*pc++])	    &lt;== funny gcc stuff
#else
    BcCode_t op;	/* make it enum to try to avoid switch range check */
#define CASE(x) case x:
#define BREAK	break
#endif
    [snip FETCH_AND_INC... macros - they fetch to 'ui', 'ul']

    pc = PC;
    sp = SP;
    fp = FP;
#ifdef GCC
    BREAK;
#else
    while (B_TRUE) {
	op = *pc++ + bc_error;
	switch (op) {
#endif
	CASE(bc_error)
	    PC = pc;
	    runError("error 1 in byte-code execution");
	    return;
	CASE(bc_ret) {
	    ULONG_T ul;
	    UINT_T ui;

	    FETCH_AND_INC_UINT();
	    sp = (ULONG_T *) ((BYTE_T *) sp + ui);  /* pop locals */
	    ul = *sp++;				/* pop return addr */
	    fp = (ULONG_T *) *sp++;		/* pop caller frame pointer */
	    FETCH_AND_INC_UINT();
	    sp = (ULONG_T *) ((BYTE_T *) sp + ui);  /* pop parameters */
	    pc = (BYTE_T *) ul;			/* return */
	    FP = fp;
	    if (sp == StackTop || BcAbort) {
		/* Have returned from an outer call - return from bcRun */
		PC = pc;
		return;
	    }
	    BREAK;
	}

&gt; &gt;I say go for the special-case ones. I have some like 'pshz', 'psh1', etc.
&gt; &gt;They are very easy to do and use, and can have a noticeable speedup. I'm
&gt; &gt;real tempted to go add 'add1' and 'sub1', as in our earlier discussion,
&gt; &gt;to see what effect they have. Should take about 15 minutes.
&gt;
&gt; I tend to agree.  I would note that my weak-typing of variables prevents
&gt; my use of specialized bytecodes except in the case of constants.  :(

Bummer!

I did the add1/sub1 experiment. It would have taken less than 10 minutes
if I hadn't left out a couple of 'break's in a code-generation switch. :-(

Unfortunately, there was no noticeable change in run speed with or without
them. However, with this version of gcc/egcs and RedHat Linux, the tests
all run slower than on the previous version (same machine). Grrrrrr!
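
For the record, the handlers themselves are about as small as byte-code
cases get - roughly this, reusing the CASE/BREAK macros shown above
(paraphrased, not cut-and-paste from my source):

	CASE(bc_add1)
	    *sp += 1;		/* specialized "push 1; add" */
	    BREAK;
	CASE(bc_sub1)
	    *sp -= 1;		/* specialized "push 1; sub" */
	    BREAK;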

&gt; Sure, but there are other considerations beyond a small(?) performance
&gt; gain that make it "sensible".   Design goals for instance.  Like providing a
&gt; common target for multiple languages.  Implementing and debugging a new
&gt; parser is half the cost of a new interpreter, since the execution piece is
&gt; reusable.  Multi-tasking for another.  It's a good deal easier to interrupt a
&gt; running VM externally and save its state than an executing Interpreter.
&gt; Although I must admit using threads might simplify it.
&gt;
&gt; Or perhaps the idea of safely unwinding the native stack in an interpreter in
&gt; order to flag an error and get back to square one just frightens me, and I find
&gt; it easier to debug something after it has been decomposed into the
&gt; simplest of functions.  For some reason I associate interpreters with
&gt; monolithic code.

Sure, I'll buy some of those. The nature of my parse-tree interpreter is
such that those weren't issues with it, however. I just have a file-static
variable that says when to get out of any loops, and let the recursive
interpretation just dribble its way back to the top. 'interpreter.c' in
my ToyMUD sources is sort-of like that - it uses the return value in the
recursive calls to say whether or not to quit. As for multiple input
languages - well, I might have to add one or two new parse-tree node
types, but that would be about it.
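
The shape of that return-value trick is roughly this (invented node names,
not the real interpreter.c):

/* Each interpretation routine returns non-zero to mean "stop now";
   callers just pass that up, so the recursion unwinds to the top on
   its own. */
typedef struct Node {
    int kind;			/* ntSeq, ntQuit, ntOther */
    struct Node *left, *right;
} Node_t;

enum { ntSeq, ntQuit, ntOther };

static int interp(Node_t *n)
{
    if (n == NULL)
	return 0;
    switch (n-&gt;kind) {
    case ntQuit:
	return 1;			/* start dribbling back up */
    case ntSeq:
	if (interp(n-&gt;left))
	    return 1;			/* abandon the rest of the sequence */
	return interp(n-&gt;right);
    default:
	return 0;			/* other node types: carry on */
    }
}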

&gt; Ok, I'll pull back the covers.  And your guess is pretty accurate.  ;)

Do I win a cookie? :-)

&gt; TMVar operator+(const TMVar&amp; v1,const TMVar&amp; v2) throw(Error)
&gt; {
&gt;   if ((v1.type != v2.type) &amp;&amp; (v1.type != LIST)) {
&gt;     throw (E_TYPE);
&gt;   }
&gt;   if ((v1.type == ARRAY) || (v1.type == ERROR) || (v1.type == OBJECT)) {
&gt;     throw (E_TYPE);
&gt;   }
&gt;
&gt;   TMVar ret(v1.type);

As I've mentioned, my C++ is lacking, but I think I'm OK with this. It's
semantically the same as

    TMVar ret = v1.type;

but without any possibility of a copy constructor call?

&gt;   switch (v1.type) {
&gt;     case NUM:
&gt;        *((int*)ret.value) = *((int*)v1.value) + *((int*)v2.value);
&gt;        break;
&gt;     case LIST:
&gt;        new(ret.v) TMList(*(TMList*)v1.value);

Here you've got me, however, and looking at the disassembly didn't help.
I grok the TMList construction and the use of 'new' to create a new object,
but what does the combination do?

&gt; And here's the native code.   I have register optimizations on, but I turned
&gt; off constructor inlines so it could be followed.  Pardon the line breaks.

OK. Ouch, those exceptions are expensive!

[much snippage]
&gt;       c:\tychomud\tmvar.cpp, 484: return ret;
&gt;       00403B37 66C745D85C00             mov   word ptr [ebp-0x28],0x005c
&gt;       00403B3D 8D55F0                   lea   edx,[ebp-0x10]
&gt;       00403B40 52                       push  edx
&gt;       00403B41 FF7508                   push  [ebp+0x08]
&gt;       00403B44 E871F1FFFF               call  TMVar::TMVar(const TMVar &amp;)
&gt;       00403B49 83C408                   add   esp,0x08
&gt;       00403B4C 8B4508                   mov   eax,[ebp+0x08]
&gt;       00403B4F 66C745D86800             mov   word ptr [ebp-0x28],0x0068
&gt;       00403B55 50                       push  eax
&gt;       00403B56 6A02                     push  0x02
&gt;       00403B58 8D55F0                   lea   edx,[ebp-0x10]
&gt;       00403B5B 52                       push  edx
&gt;       00403B5C E896F2FFFF               call  TMVar::~TMVar()
&gt;       00403B61 83C408                   add   esp,0x08
&gt;       00403B64 58                       pop   eax
&gt;       00403B65 66C745D85C00             mov   word ptr [ebp-0x28],0x005c
&gt;       00403B6B FF45E4                   inc   [ebp-0x1c]
&gt;       00403B6E 8B55C8                   mov   edx,[ebp-0x38]
&gt;       00403B71 64891500000000           mov   fs:[0x00000000],edx
&gt;       c:\tychomud\tmvar.cpp, 485: }

I think this bit is what I was getting at with my worry about temporaries
with reference variables. But again, I'm far from an expert. This pair
of constructor/destructor calls on the 'return' seems to be undesirable.
If you rewrote the operator routines to take a 3rd reference parameter
that says where to store the result, wouldn't this extra stuff go away?
This isn't a reference-parameter case as such, but it's something similar.
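
That is, something shaped like this (invented names and a hypothetical
setType() helper - not Jon's code, just the idea):

/* Result object is supplied by the caller and filled in place, so the
   return path has no extra copy-construct/destruct pair. */
void tmAdd(const TMVar &amp;v1, const TMVar &amp;v2, TMVar &amp;result)
{
    if (v1.type != v2.type &amp;&amp; v1.type != LIST)
	throw E_TYPE;
    result.setType(v1.type);	/* hypothetical: reset 'result' in place */
    /* then fill result.value the same way operator+ does now */
}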

&gt; Passing by reference is the same as passing by pointer as far as the
&gt; machine code that is generated by the compiler.  Of course, as a general rule,
&gt; I only use const reference parameters for safety.

Well, speaking as a non-C++ expert, my understanding is that it is fairly
easy to accidentally pass a non-lvalue to a routine that requires a
reference parameter. The compiler will silently proceed to create the
needed lvalue by creating a new temporary value of the required type, and
doing an assignment of the expression to the new temporary. It then passes
the address of the new temporary to the routine. This is a bug if, in this
particular call, a value stored into that reference parameter matters.
It is a performance hit because of the possible constructor/destructor
calls on the temporary variable, in addition to the value copy.
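
A made-up example of the kind of thing I mean:

// Passing an int where a const Thing&amp; is expected makes the compiler
// build a temporary Thing (converting constructor, later a destructor)
// just for the call.  On compilers that also allow this for non-const
// references, anything stored through the parameter goes into the
// temporary and is silently lost.
#include &lt;stdio.h&gt;

struct Thing {
    int v;
    Thing(int n) : v(n) { printf("construct temporary\n"); }
    ~Thing()            { printf("destroy temporary\n"); }
};

static void show(const Thing &amp;t)
{
    printf("%d\n", t.v);
}

int main()
{
    show(42);		// 42 is not a Thing: a temporary Thing is created here
    return 0;
}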

Just for geek fun, here is the code gcc generated for my addition case.
This includes the final 'goto' thingy.

	CASE(bc_add) {
	    ULONG_T ul;

	    ul = *sp++;
0xc72 &lt;bcRun+1298&gt;:	movl   (%esi),%eax
0xc74 &lt;bcRun+1300&gt;:	addl   $0x4,%esi
	    *sp += ul;
0xc77 &lt;bcRun+1303&gt;:	addl   %eax,(%esi)
	    BREAK;
0xc79 &lt;bcRun+1305&gt;:	movzbl (%edi),%eax
0xc7c &lt;bcRun+1308&gt;:	incl   %edi
0xc7d &lt;bcRun+1309&gt;:	jmp    *0x0(,%eax,4)
	}

(This is an unlinked .o file - there would normally be a non-zero offset
in that last 'jmp' instruction.)

-- 
Don't design inefficiency in - it'll happen in the implementation.

Chris Gray     cg#ami-cg,GraySage.Edmonton.AB.CA
               <A  HREF="http://www.GraySage.Edmonton.AB.CA/cg/">http://www.GraySage.Edmonton.AB.CA/cg/</A>



_______________________________________________
MUD-Dev maillist  -  MUD-Dev#kanga,nu
<A  HREF="http://www.kanga.nu/lists/listinfo/mud-dev">http://www.kanga.nu/lists/listinfo/mud-dev</A>

</PRE>

<!--X-Body-of-Message-End-->
<!--X-MsgBody-End-->
<!--X-Follow-Ups-->
<HR>
<!--X-Follow-Ups-End-->
<!--X-References-->
<!--X-References-End-->
<!--X-BotPNI-->
<UL>
<LI>Prev by Date:
<STRONG><A HREF="msg00146.html">Re: [MUD-Dev] Community Relations</A></STRONG>
</LI>
<LI>Next by Date:
<STRONG><A HREF="msg00148.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></STRONG>
</LI>
<LI>Prev by thread:
<STRONG><A HREF="msg00116.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></STRONG>
</LI>
<LI>Next by thread:
<STRONG><A HREF="msg00148.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></STRONG>
</LI>
<LI>Index(es):
<UL>
<LI><A HREF="index.html#00145"><STRONG>Date</STRONG></A></LI>
<LI><A HREF="thread.html#00145"><STRONG>Thread</STRONG></A></LI>
</UL>
</LI>
</UL>

<!--X-BotPNI-End-->
<!--X-User-Footer-->
<!--X-User-Footer-End-->
<ul><li>Thread context:
<BLOCKQUOTE><UL>
<LI><STRONG>Re: [MUD-Dev] Storing tokens with flex &amp; bison</STRONG>, <EM>(continued)</EM>
<ul compact>
<LI><strong><A NAME="00048" HREF="msg00048.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
cg <a href="mailto:cg#ami-cg,GraySage.Edmonton.AB.CA">cg#ami-cg,GraySage.Edmonton.AB.CA</a>, Tue 04 Jan 2000, 07:07 GMT
</LI>
<LI><strong><A NAME="00068" HREF="msg00068.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
cg <a href="mailto:cg#ami-cg,GraySage.Edmonton.AB.CA">cg#ami-cg,GraySage.Edmonton.AB.CA</a>, Sat 08 Jan 2000, 04:27 GMT
<UL>
<LI><strong><A NAME="00069" HREF="msg00069.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
Chris Jones <a href="mailto:cjones#v-wave,com">cjones#v-wave,com</a>, Sat 08 Jan 2000, 07:19 GMT
</LI>
</UL>
</LI>
<LI><strong><A NAME="00116" HREF="msg00116.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
Jon A. Lambert <a href="mailto:jlsysinc#ix,netcom.com">jlsysinc#ix,netcom.com</a>, Tue 18 Jan 2000, 07:01 GMT
</LI>
<LI><strong><A NAME="00145" HREF="msg00145.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
cg <a href="mailto:cg#ami-cg,GraySage.Edmonton.AB.CA">cg#ami-cg,GraySage.Edmonton.AB.CA</a>, Wed 19 Jan 2000, 03:32 GMT
</LI>
<LI><strong><A NAME="00148" HREF="msg00148.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
cg <a href="mailto:cg#ami-cg,GraySage.Edmonton.AB.CA">cg#ami-cg,GraySage.Edmonton.AB.CA</a>, Wed 19 Jan 2000, 04:32 GMT
</LI>
<LI><strong><A NAME="00156" HREF="msg00156.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
Jon A. Lambert <a href="mailto:jlsysinc#ix,netcom.com">jlsysinc#ix,netcom.com</a>, Wed 19 Jan 2000, 14:58 GMT
</LI>
<LI><strong><A NAME="00174" HREF="msg00174.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
cg <a href="mailto:cg#ami-cg,GraySage.Edmonton.AB.CA">cg#ami-cg,GraySage.Edmonton.AB.CA</a>, Thu 20 Jan 2000, 03:04 GMT
</LI>
</ul>
</LI>
</UL></BLOCKQUOTE>

</ul>
<hr>
<center>
[&nbsp;<a href="../">Other Periods</a>
&nbsp;|&nbsp;<a href="../../">Other mailing lists</a>
&nbsp;|&nbsp;<a href="/search.php3">Search</a>
&nbsp;]
</center>
<hr>
</body>
</html>