<!-- MHonArc v2.4.4 -->
<!--X-Subject: Re: [MUD&#45;Dev] Storing tokens with flex &#38; bison -->
<!--X-From-R13: ptNnzv&#45;pt.UenlEntr.Sqzbagba.OP.QO -->
<!--X-Date: Sun, 02 Jan 2000 22:20:12 &#45;0800 -->
<!--X-Message-Id: 200001030616.XAA01992@ami&#45;cg.GraySage.Edmonton.AB.CA -->
<!--X-Content-Type: text/plain -->
<!--X-Head-End-->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<html>
<head>
<title>MUD-Dev message, Re: [MUD-Dev] Storing tokens with flex &amp; bison</title>
<!-- meta name="robots" content="noindex,nofollow" -->
<link rev="made" href="mailto:cg#ami-cg,GraySage.Edmonton.AB.CA">
</head>
<body background="/backgrounds/paperback.gif" bgcolor="#ffffff"
      text="#000000" link="#0000FF" alink="#FF0000" vlink="#006000">

  <font size="+4" color="#804040">
    <strong><em>MUD-Dev<br>mailing list archive</em></strong>
  </font>
      
<br>
[&nbsp;<a href="../">Other Periods</a>
&nbsp;|&nbsp;<a href="../../">Other mailing lists</a>
&nbsp;|&nbsp;<a href="/search.php3">Search</a>
&nbsp;]
<br clear=all><hr>
<!--X-Body-Begin-->
<!--X-User-Header-->
<!--X-User-Header-End-->
<!--X-TopPNI-->

Date:&nbsp;
[&nbsp;<a href="msg00031.html">Previous</a>
&nbsp;|&nbsp;<a href="msg00033.html">Next</a>
&nbsp;]
&nbsp;&nbsp;&nbsp;&nbsp;
Thread:&nbsp;
[&nbsp;<a href="msg00030.html">Previous</a>
&nbsp;|&nbsp;<a href="msg00045.html">Next</a>
&nbsp;]
&nbsp;&nbsp;&nbsp;&nbsp;
Index:&nbsp;
[&nbsp;<A HREF="author.html#00032">Author</A>
&nbsp;|&nbsp;<A HREF="#00032">Date</A>
&nbsp;|&nbsp;<A HREF="thread.html#00032">Thread</A>
&nbsp;]

<!--X-TopPNI-End-->
<!--X-MsgBody-->
<!--X-Subject-Header-Begin-->
<H1>Re: [MUD-Dev] Storing tokens with flex &amp; bison</H1>
<HR>
<!--X-Subject-Header-End-->
<!--X-Head-of-Message-->
<UL>
<LI><em>To</em>: <A HREF="mailto:mud-dev#kanga,nu">mud-dev#kanga,nu</A></LI>
<LI><em>Subject</em>: Re: [MUD-Dev] Storing tokens with flex &amp; bison</LI>
<LI><em>From</em>: <A HREF="mailto:cg#ami-cg,GraySage.Edmonton.AB.CA">cg#ami-cg,GraySage.Edmonton.AB.CA</A></LI>
<LI><em>Date</em>: Sun, 2 Jan 2000 23:16:55 -0700</LI>
<LI><em>Reply-To</em>: <A HREF="mailto:mud-dev#kanga,nu">mud-dev#kanga,nu</A></LI>
<LI><em>Sender</em>: <A HREF="mailto:mud-dev-admin#kanga,nu">mud-dev-admin#kanga,nu</A></LI>
</UL>
<!--X-Head-of-Message-End-->
<!--X-Head-Body-Sep-Begin-->
<HR>
<!--X-Head-Body-Sep-End-->
<!--X-Body-of-Message-->
<PRE>
[Jon A. Lambert:]

[Oh dear, I'm running on again. Sorry about that!]

&gt; I've noticed that you can obtain lots of little optimization tweaks just by
&gt; using "lazy evaluation".  That is, instead of emitting an opcode as soon as
&gt; it is identified, pass that opcode along as an integer to the next routine
&gt; in the parse tree as you begin to evaluate the next token.  Based on the
&gt; next token, you may be able to output code that handles that special case
&gt; rather than the general case.
&gt;
&gt; For instance in the general case, you might evaluate and generate:
&gt; x+y --&gt; pushvar(x) pushvar(y) addop()
&gt; x+1 --&gt; pushvar(x) pushint(1) addop()
&gt;
&gt; The second case can be optimized as a special case if, instead of
&gt; immediately emitting pushvar(x), you wait until the next expression is
&gt; identified.  Thus it could become:
&gt;
&gt; x+1 --&gt; pushvar(x) increment()
&gt; or 
&gt; x+1 --&gt; pushvar(x+1) 
&gt;
&gt; I imagine something similar goes on in C compilers, which recognize
&gt; forms like 'x++' and 'x=x+1' as equivalent.

Normally, you just do the special-case check at the place where you have
all of the needed information. In this case, it would be in the processing
for the '+' operation. I don't have an 'add1' opcode, so this particular
example doesn't work for me, but here is a more generic one, which is
a minor kind of "compile-time expression evaluation":

    case ex_negate:
    case ex_fNegate:
	ex = ex-&gt;ex_v.ex_expressionPtr-&gt;ex_right;
	if (ex-&gt;ex_kind == ex_int) {
	    bcGenConst(- ex-&gt;ex_v.ex_integer);
	} else {
	    bcComp(ex);
	    bcWriteOp(bc_neg);
	}
	break;

If the operand for the unary '-' operator is a constant, then just generate
the negative of that constant, rather than generating the positive
value and then a 'neg' byte-code instruction.

Some compilers do a lot of special cases like this. Others have a lot
of "tree rewriting rules". They work through the built-up parse trees,
applying various rewriting rules wherever their patterns match. The result
is still a valid parse tree, but it has been optimized with a bunch of
special cases.
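
As a rough sketch of what one such rule might look like (the node type and
names here are made up for illustration, not taken from my code), a rule
that folds "constant + constant" into a single constant could be:

    #include &lt;stddef.h&gt;

    /* Hypothetical expression node, just for this example. */
    enum { EX_INT, EX_ADD };

    typedef struct Expr {
	int kind;                     /* EX_INT, EX_ADD, ... */
	long intValue;                /* meaningful when kind == EX_INT */
	struct Expr *left, *right;
    } Expr;

    /* Rewriting rule: "constant + constant" becomes one constant node. */
    static Expr *foldAdd(Expr *e)
    {
	if (e-&gt;kind == EX_ADD &amp;&amp;
	    e-&gt;left-&gt;kind == EX_INT &amp;&amp; e-&gt;right-&gt;kind == EX_INT) {
	    e-&gt;kind = EX_INT;
	    e-&gt;intValue = e-&gt;left-&gt;intValue + e-&gt;right-&gt;intValue;
	    e-&gt;left = e-&gt;right = NULL;   /* children no longer needed */
	}
	return e;                        /* result is still a valid tree */
    }

A full rewriter just walks the tree, trying each rule at each node, and
repeats until nothing more matches.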

&gt; Another possible optimization is for the parser to generate code for
&gt; a virtual machine that uses a combination of register and stack instructions.
&gt; The parser can be made smart enough to know to use registers.

I thought about that a bit for my system. If you step back a bit and look
at what a modern CPU is actually doing, you find that the register set is
very much like a stack - they will both end up in the processor's data
cache. The key thing to look at is how many native machine instructions
it takes to execute the sequence you want to look at. With a register
set, you are going to have to extract a register number from the byte-code
stream, and use that to index the array of registers in the virtual machine.
That takes native machine instructions, perhaps more than are needed to
do stack operations, given that many real CPUs have post-increment and
pre-decrement addressing modes that the native compiler can generate.
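
To make the comparison concrete, here is a sketch of the two dispatch
styles side by side (the opcode and variable names are invented for the
example, not taken from my system):

    typedef unsigned char byte;

    enum { OP_IADD, OP_RADD, OP_HALT };

    static void run(byte *pc, int *sp, int *regs)
    {
	int temp;

	for (;;) {
	    switch (*pc++) {
	    case OP_IADD:          /* stack style: no operand bytes to decode */
		temp = *sp++;
		*sp += temp;
		break;
	    case OP_RADD: {        /* register style: fetch three register
				      numbers, then index the register array */
		int dst = *pc++;
		int a = *pc++;
		int b = *pc++;
		regs[dst] = regs[a] + regs[b];
		break;
	    }
	    case OP_HALT:
		return;
	    }
	}
    }

Fewer virtual instructions in the register form, but more native work per
instruction to decode them - that's the tradeoff I'm getting at.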

&gt; And finally, there are optimizations that you just can't do with a RD parser. 
&gt; You can pass the generated byte-code into a post processor that rescans
&gt; it for patterns that can be simplified.  Call it a byte-code optimizer, or
&gt; peephole optimizer.  The only problem with this last bit is that pretty printing
&gt; or reverse disassembly to source becomes rather difficult.  I've noticed a
&gt; lot of mud servers do not do this and seek to support reverse code generation
&gt; instead of storing the source code somewhere.  Perhaps they are missing
&gt; out on some major optimization possibilities.

Those can readily be done in conjunction with recursive descent parsing.
Let's make sure we understand the context here. Recursive descent parsing
normally only means the parsing of the input token sequence into complete
statements and functions understood by the language parser as a whole.
That parser can be tied into direct code generation which emits byte-codes,
or can produce a "parse-tree" data structure.

In the latter case, a recursive walk over that tree will produce a byte-code
stream. That recursive walk is *not* known as recursive descent parsing.
I don't think I've seen the term "recursive descent" used for anything other
than parsing, probably just to avoid possible confusion.

In any case, optimizations can be done regardless of what parsing or
code-generation techniques are used. In my MUD, I build a parse tree,
because that is the form that non-byte-code-compiled code runs from,
directly. In my Draco compiler for the 8080, and later for the MC68000,
the recursive descent parser directly emitted native machine code, which
was filtered
through a pretty good peephole optimizer. Peephole optimizers, and special
cases like we've been discussing, can make for pretty reasonable generated
code. Nothing like gcc and commercial compilers which do things like
register colouring and full program restructuring, of course!
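
For what it's worth, the core of a peephole pass over a byte-code buffer
can be quite small. Something along these lines (a sketch only, with
made-up opcode names, and assuming these opcodes carry no operand bytes -
this is not my Draco code):

    #include &lt;stddef.h&gt;

    enum { BC_PUSHINT1, BC_ADD, BC_INC };

    /* Rewrite the buffer in place; return the new (possibly shorter) length. */
    static size_t peephole(unsigned char *code, size_t len)
    {
	size_t in = 0, out = 0;

	while (in &lt; len) {
	    /* pattern: "push constant 1; add"  becomes  "increment" */
	    if (in + 1 &lt; len &amp;&amp;
		code[in] == BC_PUSHINT1 &amp;&amp; code[in + 1] == BC_ADD) {
		code[out++] = BC_INC;
		in += 2;
	    } else {
		code[out++] = code[in++];
	    }
	}
	return out;
    }

A real pass would be run repeatedly until no more changes occur, since one
replacement can expose another matching pattern.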

&gt; &gt;Byte-code execution really only makes sense, I believe, for a strongly
&gt; &gt;typed language. If run-time checks and conversions are needed, they will
&gt; &gt;likely cost far more time than the basic interpretation of the code, and
&gt; &gt;so going to byte-code instead of staying with something more direct, could
&gt; &gt;be a waste of time, in that byte-code won't speed execution up much.

&gt; I don't know whether it makes sense or not.  You are paying for
&gt; something in the subjective area of "ease of user programming" that is
&gt; difficult to quantify.  I agree that evaluation at runtime is definitely
&gt; going to be slower than compile-time evaluation.  I don't agree that it
&gt; will be slower than interpreted execution, though.  Certainly some of the
&gt; optimizations I talked about above are operative with strong type checking.

Either I'm not understanding what you are getting at, or you are
misinterpreting what I was getting at. I was not referring to evaluation
at compile time (e.g. my example above of the compile-time versus
run-time evaluation of the unary negation operator). I was referring to
the work needed to do type-checking in a non-strongly-typed language.
I understand that lots of people prefer dynamic typing, but my point is
that, having made that choice, you have to put up with the run-time cost
of it, regardless of what kind of run-time you have: direct interpretation
of program source, tree interpretation, byte-code execution, native code
execution. For example, let's say you have a language which does not use
strong typing (variables have no types at parse/compile time). You then
write something like this:

    a = "hello";
    ...
    a = a + func();

Associated with variable 'a', at run-time, must be information about what
type of value is currently stored in the variable. That type information
must be examined in the second line, in order to know what to do with the
'+' operation. If the language uses '+' for string concatenation, and
'func' happens to return a string, then that statement does string
concatenation. If there had been some other assignment to 'a', the
statement might do integer (or perhaps floating point) addition. The
point is that the check for what to do must be made at run-time. The
only way around this that I'm familiar with is that of having a very
smart compiler, which attempts to deduce the correct type, so that it
can get away without the run-time checks. That would be *very* fancy
stuff for a MUD system!
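
Just to show where the cost comes from, here is a sketch of that run-time
check (the tagged-value layout and the helper names are hypothetical, not
from any particular server):

    typedef enum { T_INT, T_STRING } Tag;

    typedef struct {
	Tag tag;                 /* current type of the value */
	union {
	    long i;
	    char *s;
	} u;
    } Value;

    /* Hypothetical helpers, declared only so the example hangs together. */
    extern char *concatStrings(const char *a, const char *b);
    extern void runtimeError(const char *msg);

    /* The '+' operation has to branch on the tags of both operands. */
    static Value doAdd(Value a, Value b)
    {
	Value r;

	if (a.tag == T_INT &amp;&amp; b.tag == T_INT) {
	    r.tag = T_INT;
	    r.u.i = a.u.i + b.u.i;
	} else if (a.tag == T_STRING &amp;&amp; b.tag == T_STRING) {
	    r.tag = T_STRING;
	    r.u.s = concatStrings(a.u.s, b.u.s);
	} else {
	    runtimeError("type mismatch in '+'");
	    r.tag = T_INT;
	    r.u.i = 0;
	}
	return r;
    }

Compare that with the two lines of C needed for the strongly-typed 'iadd'
case below.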

In a strongly typed language, 'a' would have a declared type. If the
parser allowed the second assignment, it would know, at compile time,
exactly what has to happen at run-time, and so would emit byte-code
that does just that. Also, the byte-code interpretation does not have
to do any checking - it just directly does what needs to be done.

Let's follow a simple example in more detail:

    int a, b;
    ...
    a = a + b;

generated bytecode:

    pshl a
    pshl b
    iadd
    popl a

If the byte-code engine has the stack pointer in a local variable called
'sp', then the C code for the 'iadd' byte-code can be just:

    int *sp;       /* byte-code engine's value stack pointer */
    int temp;
    ...
    temp = *sp++;  /* pop the top operand (b) */
    *sp += temp;   /* add it into the new top of stack (a) */

(Ignoring any code for fetching opcodes and dispatching to the proper
C code for the byte-codes.)

With a non-strongly-typed language (assuming no smart compiler that can
deduce the types), either the byte-code emitted is considerably more
complex, or the C code for the byte-code engine is considerably more
complex. Either way, it runs slower, perhaps significantly slower.
Remember that in today's processors, conditional branches ('if' statements)
can slow things down a *lot*. You also need to store the current type of
each variable, as well as its current value. That adds statements
to your byte-code machine source, and uses more registers/memory.

-- 
Don't design inefficiency in - it'll happen in the implementation.

Chris Gray     cg#ami-cg,GraySage.Edmonton.AB.CA
               <A  HREF="http://www.GraySage.Edmonton.AB.CA/cg/">http://www.GraySage.Edmonton.AB.CA/cg/</A>



_______________________________________________
MUD-Dev maillist  -  MUD-Dev#kanga,nu
<A  HREF="http://www.kanga.nu/lists/listinfo/mud-dev">http://www.kanga.nu/lists/listinfo/mud-dev</A>

</PRE>

<!--X-Body-of-Message-End-->
<!--X-MsgBody-End-->
<!--X-Follow-Ups-->
<HR>
<!--X-Follow-Ups-End-->
<!--X-References-->
<!--X-References-End-->
<!--X-BotPNI-->
<UL>
<LI>Prev by Date:
<STRONG><A HREF="msg00031.html">Re: [MUD-Dev] concerning tokenization, compilation, performance, and other fun stuff.</A></STRONG>
</LI>
<LI>Next by Date:
<STRONG><A HREF="msg00033.html">[MUD-Dev] Library submission notification and updates</A></STRONG>
</LI>
<LI>Prev by thread:
<STRONG><A HREF="msg00030.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></STRONG>
</LI>
<LI>Next by thread:
<STRONG><A HREF="msg00045.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></STRONG>
</LI>
<LI>Index(es):
<UL>
<LI><A HREF="index.html#00032"><STRONG>Date</STRONG></A></LI>
<LI><A HREF="thread.html#00032"><STRONG>Thread</STRONG></A></LI>
</UL>
</LI>
</UL>

<!--X-BotPNI-End-->
<!--X-User-Footer-->
<!--X-User-Footer-End-->
<ul><li>Thread context:
<BLOCKQUOTE><UL>
<LI><STRONG>Re: [MUD-Dev] Storing tokens with flex &amp; bison</STRONG>, <EM>(continued)</EM>
<ul compact>
<ul compact>
<LI><strong><A NAME="00066" HREF="msg00066.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
Phillip Lenhardt <a href="mailto:philen#funky,monkey.org">philen#funky,monkey.org</a>, Fri 07 Jan 2000, 21:56 GMT
<UL>
<LI><strong><A NAME="00067" HREF="msg00067.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
Dominic J. Eidson <a href="mailto:sauron#the-infinite,org">sauron#the-infinite,org</a>, Fri 07 Jan 2000, 22:35 GMT
</LI>
</UL>
</LI>
<LI><strong><A NAME="00120" HREF="msg00120.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
J C Lawrence <a href="mailto:claw#kanga,nu">claw#kanga,nu</a>, Tue 18 Jan 2000, 08:47 GMT
</LI>
</ul>
<LI><strong><A NAME="00030" HREF="msg00030.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
Jon A. Lambert <a href="mailto:jlsysinc#ix,netcom.com">jlsysinc#ix,netcom.com</a>, Mon 03 Jan 2000, 05:23 GMT
</LI>
<LI><strong><A NAME="00032" HREF="msg00032.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
cg <a href="mailto:cg#ami-cg,GraySage.Edmonton.AB.CA">cg#ami-cg,GraySage.Edmonton.AB.CA</a>, Mon 03 Jan 2000, 06:20 GMT
</LI>
<LI><strong><A NAME="00045" HREF="msg00045.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
Jon A. Lambert <a href="mailto:jlsysinc#ix,netcom.com">jlsysinc#ix,netcom.com</a>, Mon 03 Jan 2000, 23:01 GMT
</LI>
<LI><strong><A NAME="00048" HREF="msg00048.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
cg <a href="mailto:cg#ami-cg,GraySage.Edmonton.AB.CA">cg#ami-cg,GraySage.Edmonton.AB.CA</a>, Tue 04 Jan 2000, 07:07 GMT
</LI>
<LI><strong><A NAME="00068" HREF="msg00068.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
cg <a href="mailto:cg#ami-cg,GraySage.Edmonton.AB.CA">cg#ami-cg,GraySage.Edmonton.AB.CA</a>, Sat 08 Jan 2000, 04:27 GMT
<UL>
<LI><strong><A NAME="00069" HREF="msg00069.html">Re: [MUD-Dev] Storing tokens with flex &amp; bison</A></strong>, 
Chris Jones <a href="mailto:cjones#v-wave,com">cjones#v-wave,com</a>, Sat 08 Jan 2000, 07:19 GMT
</LI>
</UL>
</LI>
</ul>
</LI>
</UL></BLOCKQUOTE>

</ul>
<hr>
<center>
[&nbsp;<a href="../">Other Periods</a>
&nbsp;|&nbsp;<a href="../../">Other mailing lists</a>
&nbsp;|&nbsp;<a href="/search.php3">Search</a>
&nbsp;]
</center>
<hr>
</body>
</html>