The performance model of Java is remarkably complex, despite Java's
reputation as a relatively simple programming language. The compiler and
the runtime system apply complex and ambitious optimization techniques
to improve a Java application's performance. The flip side of the coin
is that it is close to impossible to judge performance issues in Java
by intuition, which makes it difficult to write a meaningful benchmark
that compares the performance of two algorithms in Java.
In this tutorial we illustrate the dos and don'ts of writing a
micro-benchmark in Java. JVMs with HotSpot technology in particular,
which are the norm these days, offer countless opportunities for
fatal mistakes. To name a typical one: the code segment whose performance
is supposed to be measured often falls victim to the so-called
"dead code elimination", meaning the compiler optimizes it away because
its result is never used. What is then measured is the performance of
"nothing": the affected micro-benchmark still yields a result, but a
meaningless one.
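As an illustration, consider the following minimal sketch (the class
names and the workload are hypothetical, not taken from the tutorial
material). In the first variant the computed square roots are discarded,
so the JIT compiler is free to eliminate the loop body as dead code and
the timing is meaningless. In the second variant the result flows into a
variable that is printed afterwards, so the computation cannot be proven
dead and must actually be executed:

    // NaiveBenchmark.java -- hypothetical example; the computed values
    // are never used, so the loop body is a prime target for dead code
    // elimination and the measured time may be that of an empty loop.
    public class NaiveBenchmark {
        public static void main(String[] args) {
            long start = System.nanoTime();
            for (int i = 0; i < 10_000_000; i++) {
                Math.sqrt(i);  // result discarded
            }
            System.out.println("elapsed ns: " + (System.nanoTime() - start));
        }
    }

    // GuardedBenchmark.java -- the result is accumulated into 'sink',
    // which is printed at the end, so the computation stays alive.
    public class GuardedBenchmark {
        public static void main(String[] args) {
            double sink = 0.0;
            long start = System.nanoTime();
            for (int i = 0; i < 10_000_000; i++) {
                sink += Math.sqrt(i);  // result kept alive via 'sink'
            }
            System.out.println("elapsed ns: " + (System.nanoTime() - start)
                    + " (sink=" + sink + ")");
        }
    }

Note that even the guarded variant is still naive in other respects, for
example with regard to JIT warm-up; these are among the other mistakes
the tutorial addresses.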
In the tutorial we discuss this and several other mistakes, using a
case study for illustration, and derive guidelines for successful
micro-benchmarking in Java. In a workshop setting with opportunities
for hands-on labs, attendees have a chance to implement a micro-benchmark
themselves, thereby exploring the intricacies of performance measurement
in Java. We also review a given micro-benchmark, scrutinizing and
improving it until it yields meaningful results.