I mean: when a variable is found,
instead of its CFA, 'LIT PFA' should
be compiled directly. When a constant
is found, 'LIT <value>' should be compiled
instead of the constant's CFA.
Am I missing anything, any potential problem?
It requires more work in COMPILE, than just doing a ",". But having a user-extensible intelligent COMPILE, (like Gforth) offers a number of advantages, especially for native-code compilers.
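As a rough sketch of what such an intelligent COMPILE, could look like (hypothetical: `variable?` and `constant?` are assumed helper words, not standard ones, that test what kind of definition an xt belongs to):

```forth
\ Sketch only, not a drop-in implementation.
\ VARIABLE? and CONSTANT? are assumed (non-standard) predicates.
: opt-compile, ( xt -- )
   dup variable? if  >body   postpone literal exit then  \ compile LIT <pfa>
   dup constant? if  execute postpone literal exit then  \ compile LIT <value>
   compile, ;                                            \ default: plain COMPILE,
```

For a variable, >BODY turns the xt into the data address, and LITERAL compiles it inline; for a constant, EXECUTE pushes the value at compile time, and LITERAL compiles that instead.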
It's actually unbelievable! All it takes is a rather
minor modification in INTERPRET.
So throughout
all these years since the 70s FORTH could have executed
programs significantly faster - but all that time they
were selling/giving away the listings
that DIDN'T feature such an advantageous change?
And even today the compiler creators don't apply
it, for no particular reason?
It only requires a change to COMPILE,. No change in INTERPRET.
In the 1970s and early 1980s the bigger problem was code size rather
than code performance. And if you compile a variable or constant into
the CFA of the variable, this costs one cell, whereas compiling it
into LIT followed by the address or value costs two cells.
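In ITC terms, the trade-off can be sketched like this (the cell layouts in the comments are illustrative and vary by system; `a` and `b` are just demo names):

```forth
variable x
: a  x ;              \ threaded code: [ CFA of x ] [ EXIT ]          - 1 cell for x
: b  [ x ] literal ;  \ threaded code: [ LIT ] [ addr of x ] [ EXIT ] - 2 cells
```

Both words push the address of x at run time; the second form spends an extra cell to avoid dispatching through the variable's code field.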
Which compiler creators do you have in mind? Those that compile for
MS-DOS? With 64KB segments, they may prefer to be stingy with the
code size.
Please, have mercy... :D it's A SINGLE cell you're
talking about.
Even if (let's assume) the two bytes
may have mattered during the 70s, still in the 80s -
the era when 16 KB of RAM, and soon afterwards 64 KB,
became the de facto standard - it wasn't a sane decision
to cripple the compiler(s) by "saving" (literally)
a few bytes.
64 KB is a whole lot compared to "savings"
of (literally) two bytes per compiled VARIABLE/CONSTANT
reference. Say we've got 200 of them together
in the program, so 400 bytes have been "saved"
at the cost of significantly degraded performance.
As for "significantly degraded performance", as long as you stick with
ITC, my results don't show that.
As for performance, here is what I measure on gforth-itc:
sieve  bubble matrix fib    fft    COMPILE,
0.173  0.187  0.142  0.253  0.085  ,
0.164  0.191  0.134  0.242  0.088  opt-compile,
There is quite a bit of variation between the runs on the Zen4 machine
where I measured this.
minforth@gmx.net (minforth) writes:
It looks like the biggest improvement came from switching
to the benchmark engine. What does that mean?
It means switching from the ITC interpreter to a faster one
(gforth-fast) that uses a mixture of DTC and native code generation, if
I have it right.