European Exascale Supercomputer JUPITER Sets New Energy Efficiency Standards with #1 Ranking on Green500

13 May 2024

The first module of the exascale supercomputer JUPITER, named JEDI, has been ranked first on the Green500 list of the world's most energy-efficient supercomputers, as announced today by Forschungszentrum Jülich and the EuroHPC Joint Undertaking, together with the ParTec-Eviden supercomputer consortium, at the International Supercomputing Conference (ISC) in Hamburg. The JUPITER Exascale Development Instrument was installed in April by the German–French consortium and has the same hardware as the JUPITER booster module, which is currently being built at Forschungszentrum Jülich.

The JUPITER Exascale Development Instrument (left, orange) at the Jülich Supercomputing Centre. Copyright: Forschungszentrum Jülich / Ralf-Uwe Limbach

The rapid pace of digitalisation and the increasing use of artificial intelligence require ever more computing power and, in turn, energy. Data centres now account for 4% of German electricity consumption, and this share is growing. Efficient computing has therefore become an increasingly important issue in recent years, and both research into and measures for increasing energy efficiency are on the rise.

Compute blade with NVIDIA GH200 Grace Hopper Superchip. Copyright: Forschungszentrum Jülich

The JUPITER supercomputer procured by the European supercomputing initiative EuroHPC Joint Undertaking is a true pioneer in this field. The first module installed in April, the JUPITER Exascale Development Instrument (JEDI), is capable of 72 billion floating-point operations per second per watt. In contrast, the previous leader achieved around 65 billion.
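As a quick back-of-the-envelope check of the figures quoted above, the efficiency gain over the previous Green500 leader works out to roughly eleven per cent (the two input values are the approximate numbers from this article, not precise benchmark results):

```python
# Back-of-the-envelope comparison of the Green500 efficiency figures
# quoted in the article, in floating-point operations per second per watt.
jedi_efficiency = 72e9     # JEDI: ~72 billion FLOP/s per watt
previous_leader = 65e9     # previous Green500 leader: ~65 billion

improvement = (jedi_efficiency - previous_leader) / previous_leader
print(f"JEDI is ~{improvement:.0%} more efficient than the previous leader")
```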

The decisive factor for the module’s outstanding efficiency is its use of graphics processing units (GPUs) and the fact that it is possible to optimise scientific applications for calculations on GPUs. Today, virtually all leading systems on the Green500 ranking rely heavily on GPUs, which are designed to perform calculations with much greater energy efficiency than conventional central processing units (CPUs).

Hot water cooling of the JEDI system from Eviden. Copyright: Forschungszentrum Jülich / Sascha Kreklau

The JEDI development system is one of the first systems in the world to use the latest generation of accelerators from NVIDIA: the NVIDIA GH200 Grace Hopper Superchip, which combines the NVIDIA Hopper GPU and the NVIDIA Grace CPU on a single module. Based on Eviden's latest BullSequana XH3000 architecture, the system features highly efficient hot water cooling (Direct Liquid Cooling), which requires significantly less energy than conventional air cooling and allows the heat generated to be reused downstream.

The JUPITER precursor JEDI already has the same equipment as the subsequent JUPITER booster module. Scientists are able to access the hardware at an early stage of development as part of the JUPITER Research and Early Access Program (JUREAP) in order to optimise their codes. In doing so, they are supported by experts from the Jülich Supercomputing Centre.

Benedikt von St. Vieth heads the "HPC, Cloud, Data Systems and Services" division, which is responsible for the development and operation of JUPITER.

With JUPITER, energy consumption (in this case green energy) and possible heat reuse were important topics from the outset. The hardware offers various facilities for energy optimisation. With JEDI, we can now prepare well in advance and see which of these can be utilised by our end users to optimise their workloads.

Benedikt von St. Vieth, Jülich Supercomputing Centre

Dr Andreas Herten, Head of the Accelerating Devices Lab, supports users in optimising their codes for JUPITER.

JEDI provides us with a unique opportunity to utilise the actual JUPITER hardware very early in the user-enablement process. It allows us to investigate the new hardware platform in depth and optimise user applications for its new features, preparing users well for the larger parts of the system.

Dr. Andreas Herten, Jülich Supercomputing Centre

JUPITER exascale supercomputer

JUPITER is set to be the first supercomputer in Europe to surpass the threshold of one exaflop, which corresponds to one quintillion (“1” followed by 18 zeros) floating-point operations per second. The final system will be installed in stages in the second half of this year, and will initially be made available to scientific users as part of the early access programme before it goes into general user operation at the beginning of 2025.

JUPITER’s enormous computing power will help to push the boundaries of scientific simulations and to train large AI models. The modular exascale system uses the dynamic modular system architecture (dMSA) developed by ParTec and the Jülich Supercomputing Centre. The JUPITER booster module, which is currently being installed, will comprise around 125 BullSequana XH3000 racks and around 24,000 NVIDIA GH200 Superchips, interconnected by NVIDIA Quantum-2 InfiniBand networking. For 8-bit calculations, which are common in AI model training, the computing power is set to reach well over 70 exaflops. As of today, this would make JUPITER the world’s fastest computer for AI.
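Dividing the article's two headline numbers gives a rough sense of scale per accelerator; this is purely illustrative arithmetic derived from the figures above, not an official per-chip specification:

```python
# Rough, illustrative arithmetic from the article's figures: ~24,000 GH200
# superchips delivering "well over 70 exaflops" at 8-bit precision implies
# roughly this much FP8 throughput per chip (not an official spec).
total_fp8_flops = 70e18   # lower bound from "well over 70 exaflops"
num_superchips = 24_000   # approximate chip count quoted above

per_chip = total_fp8_flops / num_superchips
print(f"~{per_chip / 1e15:.1f} petaflops of 8-bit compute per superchip")
```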

According to estimates, JUPITER’s average power requirement will be around 11 megawatts. Further measures will help to use this energy even more sustainably: the modular data centre housing JUPITER is designed to capture the heat extracted by the cooling system and use it to heat buildings on the Forschungszentrum Jülich campus.

All hardware and software components of JUPITER will be installed and managed by the unique JUPITER Management Stack. This is a combination of ParaStation Modulo (ParTec), SMC xScale (Atos/Eviden), and software components from JSC.

JUPITER development system JEDI

The JUPITER development system JEDI is much smaller than the final exascale computer. It consists of a single rack from the latest BullSequana XH3000 series, which currently contains 24 individual computers, known as compute nodes. These are connected to each other via four NVIDIA Quantum-2 InfiniBand switches and will be complemented by 24 additional compute nodes over the course of May.

During measurements for the Green500 ranking of the most energy-efficient supercomputers, the JEDI system achieved a computing power of 4.5 quadrillion floating-point operations per second, or 4.5 petaflops, with an average power consumption of 66 kilowatts. During optimised operation, the power consumption was reduced to 52 kilowatts.
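The simple ratios of performance to power from these measurements can be computed directly. Note that the resulting values bracket the ~72 GFLOPS/W figure quoted earlier; the Green500 ranking applies its own power-measurement methodology during the HPL benchmark, so a naive division of the article's rounded numbers is only an approximation:

```python
# Simple efficiency ratios from the measurements quoted above. These
# approximate values bracket the ~72 GFLOPS/W Green500 figure, which is
# based on the ranking's own power-measurement window during HPL.
performance = 4.5e15   # 4.5 petaflops sustained

for label, power_watts in [("average run", 66e3), ("optimised run", 52e3)]:
    efficiency = performance / power_watts
    print(f"{label}: ~{efficiency / 1e9:.0f} GFLOPS per watt")
```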

Half of the funding for JEDI and the final JUPITER system is provided by the European Union via EuroHPC; the remaining half is split equally between the German Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW-NRW) through the Gauss Centre for Supercomputing (GCS).

Interview: Background and outlook

JEDI, the precursor of the European exascale supercomputer JUPITER, is a real pioneer in terms of energy efficiency. In this interview, Prof. Dr. Dr. Thomas Lippert, director of the Jülich Supercomputing Centre, explains what the new Jülich efficiency record is all about.

>>> Interview with Prof. Dr. Dr. Thomas Lippert


Benedikt von St. Vieth

Head of Division HPC, Cloud and Data Systems and Services

  • Institute for Advanced Simulation (IAS)
  • Jülich Supercomputing Centre (JSC)
Building 16.4 /
Room 209
+49 2461/61-9401

Dr. Andreas Herten

Head of ATML Accelerating Devices

  • Institute for Advanced Simulation (IAS)
  • Jülich Supercomputing Centre (JSC)
Building 16.3 /
Room 228
+49 2461/61-1825

Media contact

Tobias Schlößer


Building 15.3 /
Room R 3028a
+49 2461/61-4771

    Last Modified: 14.05.2024