Copyright © 2002 Southern Storm Software, Pty Ltd.
Permission to distribute unmodified copies of this work is hereby granted.
1. What is PNetMark?

The purpose of this tool is to identify areas of Portable.NET that
may need further optimisation. It is not intended to compare
Portable.NET with other CLR implementations.
Never believe benchmark numbers for engines whose source code you
cannot inspect. Vendors have been known to modify their engines
specifically to get high scores on benchmarks. If you cannot see what
hacks they've added to the code to lie to the benchmark, then you
shouldn't believe the numbers.
For example, some JITs get a disproportionately large score on loop
benchmarks, but that is probably due to the JIT optimising the entire
benchmark away, as it has no side effects. Normal applications don't
use loops without side effects, so such an optimisation is useless
in practical scenarios.
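To make this concrete, here is a hypothetical fragment (not taken
from PNetMark) that an aggressive optimiser is entitled to delete
outright, because its result is never observed:

    public class DeadLoop
    {
        public static void Main()
        {
            int sum = 0;
            for (int i = 0; i < 100000000; ++i)
            {
                sum += i;       // the result is never used
            }
            // 'sum' is dead at this point, so the entire loop has no
            // observable effect and an aggressive optimiser may delete
            // it, producing an absurdly good "loop benchmark" score.
        }
    }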
A JIT that lies to a benchmark may appear to be faster, but in fact
will be slower. Lying JITs spend extra time checking for optimisations
that will not be needed in real applications. This extra checking
slows the JIT down, and so real applications run slower than on
an honest engine.
Never believe comparisons between engines from different vendors.
Different implementation techniques lead to different trade-offs.
Benchmarks don't accurately measure these trade-offs.
For example, it is possible for an interpreter to out-perform a JIT
if the application is I/O bound, and the interpreter has been optimised
for I/O operations. A JIT that can perform fantastic loop optimisations
will be useless on such an application.
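As a hypothetical sketch (not part of PNetMark), the following C#
program is dominated by disk reads; the loop body is so cheap relative
to the I/O that the choice of execution engine barely matters:

    using System;
    using System.IO;

    public class IoBound
    {
        public static void Main(String[] args)
        {
            // Almost all of the running time is spent inside the
            // operating system, waiting for the disk.  A faster JIT
            // cannot make the disk spin faster.
            FileStream stream = File.OpenRead(args[0]);
            byte[] buffer = new byte[4096];
            long total = 0;
            int n;
            while ((n = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                total += n;
            }
            stream.Close();
            Console.WriteLine("read {0} bytes", total);
        }
    }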
It is also possible for an interpreter to out-perform a JIT in other ways.
Some JITs expend a huge amount of memory to create temporary data structures
to assist them with the code conversion process. The overhead of allocating
and managing all of this memory will have a noticeable impact on the
performance of larger applications. Smaller benchmark applications won't
reveal this problem.
Finally, remember that most applications spend the bulk of their time
waiting for the user to press a key or move the mouse, or waiting for
a remote process to respond to a request. No amount of fancy optimisations
can speed up the user or the remote machine.
You may be tempted to run PNetMark against the Microsoft CLR. If you
do, you cannot tell the author of the benchmark, or anyone else for
that matter, what the results are. The following is an excerpt from
Microsoft's End User License Agreement (EULA) for their .NET Framework SDK:

    Performance or Benchmark Testing. You may not disclose the
    results of any benchmark test of either the Server Software or
    Client Software to any third party without Microsoft's prior
    written approval.

Thus, you can run the benchmark if you like, but you must keep the
results to yourself.
2. Where can I get PNetMark?
The latest version of PNetMark can always be found at the
following site:
http://www.southern-storm.com.au/portable_net.html
To build and run PNetMark, you will first need to download and
install the Portable.NET package. Portable.NET is available
from the same location.
3. How do I build PNetMark?
Once you have downloaded a version of PNetMark, it can be unpacked
and built in the usual manner:

    $ gunzip -c pnetmark-VERSION.tar.gz | tar xvf -
    $ cd pnetmark-VERSION
    $ ./configure
    $ make
    $ make install

This assumes that you already have Portable.NET installed on your
system. If you did not do a "make install" on Portable.NET, then
you can specify the location of the Portable.NET build trees as follows:

    $ ./configure --with-pnet=../pnet-0.5.0 --with-pnetlib=../pnetlib-0.5.0
4. How do I run PNetMark?
Once you have built PNetMark, you can run it from the build tree
as follows:

    $ make check

Higher numbers mean better performance. PNetMark can also be run
manually using the following command-line:

    $ ilrun pnetmark.exe
5. What do the benchmarks really indicate?
PNetMark can be used to compare two different versions of Portable.NET,
running on the same machine, to determine whether a particular code change
makes the engine perform better or worse.

6. Vendor X's engine has a higher PNetMark than you. What does that mean?
Basically nothing. Using PNetMark to compare Portable.NET with other
CLR implementations will probably give bogus results. Are they running
on different machines? Are they using different implementation technologies?
Have the engines been optimised to fool the benchmark? There are a million
reasons why two engines will give different results.

7. Can I publish PNetMark results?
If you like. But keep in mind the issues discussed in questions
5 and 6 above. The numbers are
only meaningful to compare different versions of Portable.NET running
on the same machine.
8. How do I add a new benchmark?
New benchmarks are provided by classes that implement the
"IBenchmark" interface. The following methods and properties
must be supplied:

    Initialize
    Name
    Magnification
    Run
    CleanUp

You also need to add a line to "PNetMark.Run" to run the benchmark
when requested. e.g.

    Run(new FooBenchmark());

Finally, add the name of your new source files to "Makefile.am"
and rebuild.
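As an illustration, a minimal benchmark class might look something
like the following sketch. The exact signatures of the "IBenchmark"
members are assumptions based on the names listed above, not the
verbatim declaration; consult the actual interface in the PNetMark
sources before copying this.

    using System;

    // Hypothetical skeleton for a new benchmark.  The member
    // signatures below are guesses from the names listed above.
    public class FooBenchmark : IBenchmark
    {
        private int result;

        // Human-readable name, reported alongside the score.
        public String Name
        {
            get { return "Foo"; }
        }

        // Assumed to be a scaling factor applied to the raw score.
        public double Magnification
        {
            get { return 1.0; }
        }

        // Allocate any data structures the benchmark needs.
        public void Initialize() {}

        // The timed body of the benchmark.
        public void Run()
        {
            int sum = 0;
            for (int i = 0; i < 100000; ++i)
            {
                sum += i;
            }
            result = sum;   // keep a side effect so the loop survives
        }

        // Release anything allocated in Initialize.
        public void CleanUp()
        {
            result = 0;
        }
    }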