Frequently Asked Questions

What is an exascale supercomputer?

In the supercomputing community, the internationally recognised definition is that an “exascale supercomputer” achieves a performance of at least 1 exaFLOP/s, i.e. it can calculate at least 1 quintillion floating point operations per second. More precisely, an exascale supercomputer exceeds the threshold of 1 quintillion FLOP/s using a suitable benchmark, in particular the Linpack benchmark for the TOP500 list with 64-bit precision.

This corresponds to the definition in Wikipedia:
“Exascale computing refers to computing systems capable of calculating at least 10¹⁸ IEEE 754 Double Precision (64-bit) operations (multiplications and/or additions) per second (exaFLOPS); it is a measure of supercomputer performance.” [1]

Supercomputer manufacturers also use this definition, for example HPE:
“Exascale computing systems analyze and solve for 1,000,000,000,000,000,000 floating point operations per second (FLOPS), simulating methods and interactions of the fundamental forces within the universe. These supercomputers are created from the ground up to handle the massive demands of today’s simulation, converged modeling, AI, and analytics workloads.” [2]

The US Department of Energy, responsible for the National Laboratories in the USA, which operate the exascale systems there, also writes:
“Exascale computing is unimaginably faster than that. 'Exa' means 18 zeros. That means an exascale computer can perform more than 1,000,000,000,000,000,000 FLOPS, or 1 exaFLOP.” [3]

The European High Performance Computing Joint Undertaking (EuroHPC JU) distinguishes between exascale, pre-exascale and petascale systems (see figure below). There are currently five petascale supercomputers co-financed by EuroHPC JU and three pre-exascale supercomputers in Europe. Two exascale supercomputers are planned by 2026; the first is JUPITER with a performance of at least 1 exaFLOP/s. [4, 5]

The EuroHPC co-funded pre-exascale supercomputers achieve maximum theoretical performances of up to about half an exaFLOP/s:

  • LUMI (CSC, Finland): 539.13 petaFLOP/s (~0.54 exaFLOP/s)
  • LEONARDO (CINECA, Italy): 315.74 petaFLOP/s (~0.32 exaFLOP/s)
  • MARENOSTRUM 5 (Barcelona Supercomputing Centre, Spain): 295.81 petaFLOP/s (~0.3 exaFLOP/s)

MELUXINA (LuxProvide, Luxembourg), the fastest EuroHPC co-funded petascale supercomputer, reaches a maximum performance of 18.29 petaFLOP/s, i.e. just under 0.02 exaFLOP/s.

Figure: Overview of the procured EuroHPC JU systems in Europe, with the exascale systems JUPITER and Alice Recoque planned by the end of 2026.

[1] https://en.wikipedia.org/wiki/Exascale_computing
[2] https://www.hpe.com/uk/en/what-is/exascale.html
[3] https://www.energy.gov/science/doe-explainsexascale-computing
[4] https://eurohpc-ju.europa.eu/supercomputers/our-supercomputers_en
[5] https://eurohpc-ju.europa.eu/signature-hosting-agreement-second-european-exascale-supercomputer-alice-recoque-2024-06-21_en

How do you measure the performance of supercomputers?

For a comparable classification of (super)computers, two quantities are important: the number of bits used to represent numbers and the number of floating point operations per second (FLOP/s).

The number of bits (i.e. ones and zeros) used internally in the computer to represent (floating point) numbers determines the accuracy with which calculations can be performed: the more bits are used for the internal representation, the more accurately the numbers can be represented and processed.

For scientific applications, 64 bits are often used to represent a floating point number; a number is then stored internally as a sequence of 64 ones and zeros. Graphics cards (GPUs) work particularly well with lower precision (8-, 16- or 32-bit). As supercomputers such as JUPITER provide a large number of GPUs, scientific applications increasingly use precisions below 64 bits where this is sufficient for their accuracy requirements. [6]
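
To make this concrete, the following sketch stores the same value with 16, 32 and 64 bits using NumPy (the library and the example value are our own illustration, not part of the cited sources):

```python
# Illustrative sketch (assumes NumPy is installed): the same value stored with
# 16, 32 and 64 bits, showing how precision grows with the number of bits.
import numpy as np

x = 1.0 / 3.0  # one third has no exact binary floating point representation

print(np.float16(x))  # 0.3333             (~3 significant decimal digits)
print(np.float32(x))  # 0.33333334         (~7 significant decimal digits)
print(np.float64(x))  # 0.3333333333333333 (~16 significant decimal digits)
```

With 64 bits, roughly 16 significant decimal digits are available, which is why this format is commonly used for scientific applications and for the TOP500 measurements.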

The unit for measuring and comparing the performance of computers is floating point operations per second, FLOP/s or sometimes also FLOPS for short [7]. This is the number of mathematical operations with two floating point numbers that the computer can perform within one second. Such mathematical operations are, for example, the addition or multiplication of two (floating point) numbers.
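
As a purely illustrative example of how such a rate is obtained (the operation, the array size and the timing method below are our own assumptions, not taken from the sources), one can count the floating point operations in a small computation and divide by the elapsed time:

```python
# Illustrative sketch: count floating point operations and divide by elapsed time.
# Assumes NumPy; the measured rate is only a rough single-node estimate.
import time
import numpy as np

n = 10_000_000
a = np.random.rand(n)
b = np.random.rand(n)

start = time.perf_counter()
c = a * b + a                      # n multiplications + n additions = 2*n FLOPs
elapsed = time.perf_counter() - start

rate = 2 * n / elapsed             # floating point operations per second
print(f"{rate:.2e} FLOP/s, i.e. about {rate / 1e9:.1f} gigaFLOP/s")
```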

As the number of FLOP/s in supercomputers is very large, prefixes from the International System of Units (SI) are used [8] to make the figures easier for humans to comprehend and compare (a short sketch after the following list shows how these prefixes can be applied):

  • kiloFLOP/s, kFLOP/s: 10³ FLOP/s = 1,000 FLOP/s (1 thousand FLOP/s)
  • megaFLOP/s, MFLOP/s: 10⁶ FLOP/s = 1,000,000 FLOP/s (1 million FLOP/s)
  • gigaFLOP/s, GFLOP/s: 10⁹ FLOP/s = 1,000,000,000 FLOP/s (1 billion FLOP/s)
  • teraFLOP/s, TFLOP/s: 10¹² FLOP/s = 1,000,000,000,000 FLOP/s (1 trillion FLOP/s)
  • petaFLOP/s, PFLOP/s: 10¹⁵ FLOP/s = 1,000,000,000,000,000 FLOP/s (1 quadrillion FLOP/s)
  • exaFLOP/s, EFLOP/s: 10¹⁸ FLOP/s = 1,000,000,000,000,000,000 FLOP/s (1 quintillion FLOP/s)
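
The following small helper (the function name and structure are purely illustrative and not taken from the sources) applies these prefixes to a raw FLOP/s value:

```python
# Illustrative sketch: format a raw FLOP/s value with the SI prefixes listed above.
PREFIXES = [(1e18, "exa"), (1e15, "peta"), (1e12, "tera"),
            (1e9, "giga"), (1e6, "mega"), (1e3, "kilo")]

def format_flops(flops: float) -> str:
    """Return a human-readable string such as '73.00 petaFLOP/s'."""
    for factor, prefix in PREFIXES:
        if flops >= factor:
            return f"{flops / factor:.2f} {prefix}FLOP/s"
    return f"{flops:.2f} FLOP/s"

print(format_flops(73e15))  # 73.00 petaFLOP/s  (JUWELS booster, see below)
print(format_flops(1e18))   # 1.00 exaFLOP/s    (JUPITER target)
```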

The JUPITER booster module is expected to achieve a performance of at least one exaFLOP/s with 64-bit precision, i.e. at least one quintillion floating point operations per second [9]. The JUWELS booster, currently the fastest system at the Jülich Supercomputing Centre and in Germany, achieves a maximum performance of 73 petaFLOP/s, i.e. 73 quadrillion FLOP/s [10]. One exaFLOP/s is thus roughly 14 times this value.

[6] https://www.fz-juelich.de/en/ias/jsc/news/news-items/news-flashes/jsc-on-jupiter-the-first-exascale-computer-in-europe-at-wissenschaft-online (in German; video starting at minute 2:50)
[7] https://en.wikipedia.org/wiki/Floating_point_operations_per_second
[8] https://en.wikipedia.org/wiki/International_System_of_Units
[9] https://www.fz-juelich.de/en/ias/jsc/jupiter/tech
[10] https://apps.fz-juelich.de/jsc/hps/juwels/configuration.html

How do you compare the performance of supercomputers?

The performance of a supercomputer cannot be captured and compared with a single figure, as it depends on many parameters. Standardised problems, so-called benchmarks, are therefore used [11]. These are typically well-defined mathematical problems that the supercomputers have to solve.

To compare the performance of two supercomputers, both run the same benchmark and the number of FLOP/s each achieves is measured, i.e. how many floating point operations per second it can perform. In simple terms, the larger and more powerful a supercomputer is, the more calculations it can carry out and the higher the measured FLOP/s. Such comparisons therefore do not tell us which system is faster in general, but which system is faster at solving the selected benchmark.

There are various benchmarks that can be used for such a comparison. In the supercomputing community, the High-Performance Linpack benchmark (HPL for short, or simply Linpack) has established itself as the de facto standard. It solves a problem from linear algebra, more precisely a dense system of linear equations. [12]
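
The idea behind such a measurement can be sketched in a few lines of Python (assuming NumPy; the real HPL benchmark is a highly tuned, distributed implementation, so this is only a conceptual illustration): solve a dense linear system, take the known operation count for that problem size and divide it by the measured time.

```python
# Conceptual sketch of a Linpack-style measurement (assumes NumPy; the real HPL
# benchmark runs a distributed, highly optimised LU factorisation across the machine).
import time
import numpy as np

n = 4000
A = np.random.rand(n, n)
b = np.random.rand(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)            # LU factorisation plus triangular solves
elapsed = time.perf_counter() - start

flop_count = (2.0 / 3.0) * n**3      # dominant term of the operation count for an LU-based solve
print(f"{flop_count / elapsed / 1e9:.1f} GFLOP/s for problem size n = {n}")
```

On a full supercomputer the same principle applies, but the matrix is distributed over thousands of nodes and the problem size is chosen to fill the available memory.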

The TOP500 list, which compares supercomputers using the Linpack benchmark, was published for the first time in 1993. Since then, the list has been updated twice a year, usually in June and November. Supercomputer operators can run the Linpack benchmark on their systems and submit the measured results for the next TOP500 list.

In addition to the Linpack benchmark, which focuses on computing performance, other benchmarks are used in the community to evaluate supercomputers. Examples include the HPCG benchmark, which also takes the memory system (RAM) into account, the IO500 benchmark, which focuses on the data storage system, and MLPerf, which looks at representative problems from the field of artificial intelligence.

[11] https://www.top500.org/project/linpack/
[12] https://www.top500.org/resources/frequently-asked-questions/

How do you measure the energy efficiency of supercomputers?

In addition to the TOP500 list, which focuses on the performance in solving a mathematical problem, the Green500 list [13] has been published since 2007. It compares the energy efficiency of supercomputers in performance per watt, or more precisely in FLOP/s per watt. For this purpose, the FLOP/s measured with the Linpack benchmark are divided by the average electrical power required to operate the supercomputer during the run [14]. JEDI, the first JUPITER module, is number 1 in the current Green500 list (June 2024), making it the most energy-efficient supercomputer among those that reported Linpack results for the list. [15]
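
As a worked illustration with made-up numbers (neither value below is an official Green500 figure), the efficiency is simply the measured Linpack performance divided by the average power drawn during the run:

```python
# Illustrative sketch with assumed, non-official numbers: energy efficiency is the
# measured Linpack performance divided by the average power during the benchmark run.
rmax_flops = 1.0e15      # assumed Linpack result: 1 petaFLOP/s
power_watts = 20_000     # assumed average power draw: 20 kW

efficiency = rmax_flops / power_watts
print(f"{efficiency / 1e9:.1f} GFLOP/s per watt")   # -> 50.0 GFLOP/s per watt
```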

[13] https://top500.org/lists/green500/
[14] https://en.wikipedia.org/wiki/Green500
[15] https://top500.org/lists/green500/2024/06/

Last Modified: 18.10.2024