JUBE Benchmarking Environment
Benchmarking a computer system usually involves numerous tasks and several runs of different applications. Configuring, compiling, and running a benchmark suite on several platforms, together with the accompanying tasks of result verification and analysis, requires a lot of administrative work and produces a lot of data, which has to be collected and analysed in a central database. Without a benchmarking environment, all these steps have to be performed by hand.
For each benchmark application, the benchmark data is written out in a certain format that enables the benchmarker to deduce the desired information. This data can be parsed by automatic pre- and post-processing scripts that extract the relevant information and store it more compactly for manual interpretation.
The JUBE benchmarking environment provides a script based framework to easily create benchmark sets, run those sets on different computer systems and evaluate the results. It is actively developed by the Jülich Supercomputing Centre of Forschungszentrum Jülich, Germany.
There are two different versions of JUBE available. JUBE version 1 is the older Perl based implementation. JUBE version 2 is a newer Python based implementation. Besides the programming language, the command line interface and the input file structure also changed. For this reason, we divide the JUBE related information pages into a JUBE 1 and a JUBE 2 part.
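To illustrate the JUBE 2 input file structure, the following is a minimal sketch of an XML benchmark definition. The benchmark, parameter set, and step names here are hypothetical examples, not taken from an official JUBE distribution; consult the JUBE 2 documentation for the authoritative tag reference.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jube>
  <!-- hypothetical benchmark: name and outpath chosen for illustration -->
  <benchmark name="hello_bench" outpath="bench_run">
    <!-- a parameter with several values causes the step
         to be executed once per value -->
    <parameterset name="param_set">
      <parameter name="ncores" type="int">1,2,4</parameter>
    </parameterset>
    <!-- a step runs its <do> commands in a separate working directory -->
    <step name="execute">
      <use>param_set</use>
      <do>echo "running on $ncores cores"</do>
    </step>
  </benchmark>
</jube>
```

Such a file would typically be started with `jube run`, and the results evaluated afterwards with the `jube analyse` and `jube result` subcommands.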
If you are interested in upcoming JUBE versions, we recommend adding your e-mail address to the JUBE-news mailing list.
JUBE Related Pages
Here is a list of all related pages:
| JUBE 2 | JUBE 1 |
| --- | --- |
| Release notes | Frequently Asked Questions |
| Copyright and Disclaimer | Download |
| | Copyright and Disclaimer |
Here is a list of projects using JUBE:
Distributed European Infrastructure for Supercomputing Applications DEISA
Partnership for Advanced Computing in Europe PRACE