Kickoff of New SCALEXA Projects
Following last year’s “SCALEXA” funding call from the Federal Ministry of Education and Research (BMBF), new collaborative research projects have recently started in the field of software and technology development in high-performance computing (HPC) in the exascale era. The SCALEXA funding guideline supports HPC as a fundamental research method in various scientific disciplines such as astrophysics, biology, Earth system modelling, and nuclear and particle physics. Engineering and industry also have a growing need for computing power. At the same time, research and applications within the field of artificial intelligence (AI) offer new concepts and perspectives for simulation, modelling, and the analysis of large data volumes.
While current supercomputers in Europe achieve performance in the pre-exaflop range, JUPITER, the first European computer in the exascale performance class, is expected to become available within the next one to two years. Modern HPC systems employ a wide variety of technologies, from new processors, accelerators, and data storage to file systems and operating systems. Programming such large, heterogeneous, and modular systems requires new software development methods and techniques, above all an end-to-end co-design approach. At the same time, application requirements are becoming more heterogeneous. For applications to exploit the capabilities of exascale systems efficiently, code and workflow scalability must be improved, especially in use cases that combine classical HPC, AI, and data analytics techniques.
The SCALEXA funding guideline complements the infrastructure development and research activities carried out under the EuroHPC Joint Undertaking (JU). The BMBF funding is intended to ensure a good starting position for German participation in these European activities, while at the same time enabling the efficient use of future exascale systems in Germany. JSC is teaming up with project partners all over Germany in six of the new SCALEXA collaborative research projects, which will run over a three-year period from 2022 to 2025. The projects in which JSC is participating mostly focus on HPC software development.
The objective of the FlexFMM project led by JSC is the realistic simulation of large interacting biomolecules via GROMACS on upcoming exascale hardware. Emphasis is placed on a scalable and flexible Fast Multipole Method (FMM) as an electrostatic solver. Its low communication complexity, dynamic protonation features, and support for non-periodic and highly inhomogeneous particle systems allow for new biomedical developments. Additionally, the project is focused on fully leveraging SiPearl’s upcoming ARM hardware with SVE vector units and HBM memory to future-proof well-established simulation tools and pave the way for exascale.
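To give a flavour of the multipole idea behind the FMM, the following Python sketch, a purely illustrative example and not FlexFMM or GROMACS code, replaces the direct sum over all charges in a cluster by its monopole and dipole moments when evaluating the potential at a distant point; compressing far-away interactions in this way is what keeps the method's communication complexity low.

```python
# Illustrative sketch of the multipole idea underlying the FMM (not project code).
# A cluster of charges is summarized by a few moments; the potential at a
# well-separated point is then evaluated from those moments instead of from
# every individual charge.
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(-0.5, 0.5, size=(1000, 3))   # charges clustered near the origin
q = rng.uniform(-1.0, 1.0, size=1000)          # charge values
target = np.array([20.0, 5.0, 3.0])            # far-away evaluation point

# Direct sum: every charge contributes individually
phi_direct = np.sum(q / np.linalg.norm(target - pos, axis=1))

# Multipole approximation about the cluster centre: only two moments needed
centre = pos.mean(axis=0)
Q = q.sum()                                    # monopole moment
p = (q[:, None] * (pos - centre)).sum(axis=0)  # dipole moment
R = target - centre
r = np.linalg.norm(R)
phi_multipole = Q / r + p @ R / r**3

print(phi_direct, phi_multipole)  # agree to several digits for well-separated targets
```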
The goal of the ADAPTEX project, led by the Institute for Software Technology at the German Aerospace Center (DLR), is to develop an open-source software framework for exascale-enabled computational fluid dynamics on dynamically adaptive grids and to apply it to the field of Earth System Modelling (ESM). By merging individually specialized HPC software libraries and extending them to heterogeneous exascale computer architectures, the project will significantly improve the scalability and resource efficiency of current and future ESM applications.
The goal of the ExaOcean project, led by the Institute of Mathematics at TU Hamburg, is to accelerate the ICON-O ocean model by at least a factor of four using a combination of classical discrete algorithms and machine learning methods. The spectral deferred correction (SDC) methods applied in the project are notable for exploiting accelerators and modular supercomputers more efficiently than conventional numerical methods alone. Beyond the runtime reduction, this approach will enable better scalability on heterogeneous systems without reducing the accuracy and quality of the simulation.
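As a rough illustration of the SDC principle, and not of the ExaOcean or ICON-O implementation, the following Python sketch applies repeated low-order sweeps to a scalar test equation and corrects each sweep with a spectral quadrature of the previous iterate, so that cheap iterations approach the accuracy of a high-order collocation method.

```python
# Minimal SDC sketch for a scalar ODE u' = f(u); illustrative only.
import numpy as np

f = lambda u: -u                      # test problem u' = -u, exact solution exp(-t)
u0, dt, n_sweeps = 1.0, 0.5, 6

# Three Lobatto nodes on [0, dt] and the node-to-node quadrature matrix S
# (integrates the quadratic interpolant of f over each sub-interval).
tau = np.array([0.0, 0.5, 1.0]) * dt
S = np.array([[5.0, 8.0, -1.0],
              [-1.0, 8.0, 5.0]]) / 24.0 * dt
dtau = np.diff(tau)

u = np.full(3, u0)                    # initial iterate: constant prediction
for _ in range(n_sweeps):
    u_new = u.copy()
    for m in range(2):                # one cheap forward (Euler-type) sweep
        quad = S[m] @ f(u)            # spectral integral of the previous iterate
        u_new[m + 1] = u_new[m] + dtau[m] * (f(u_new[m]) - f(u[m])) + quad
    u = u_new

print(u[-1], np.exp(-dt))             # the sweeps converge towards the exact value
```

It is this decomposition of a time step into many inexpensive, correctable sweeps that is intended to map well onto accelerators and modular systems.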
The IFCES2 project, led by the Leibniz Institute for Tropospheric Research (TROPOS), will develop new methods to optimize the parallel execution of simulation algorithms from the Earth system model ICON on heterogeneous and modular exascale systems. Code parallelism will be improved, and methods will be applied to enable better communication between the individual model components as well as dynamic load balancing. The methods will be validated with use cases from cloud microphysics and ocean biogeochemistry. As a result, the prerequisites will be created for the efficient use of modular exascale systems for complex simulations.
The aim of the MExMeMO project is to develop a multi-scale model for the manufacturing processes of soft materials. These applications pose a major challenge for HPC, as complex processes are coupled across very different scales and the associated simulation techniques place very different demands on the hardware. The new multi-scale model, which takes a specific polymer membrane as its example case, spans different length and time scales so that it can be mapped onto flexible computer architectures. Furthermore, it builds on the concept of the modular supercomputing architecture for exascale computing.
The goal of the StrömungsRaum project, led by the Department of Mathematics at TU Dortmund, is to methodologically extend the CFD software package FEATFLOW with parallel, hardware-oriented implementations so that highly scalable industrial CFD simulations can be run on future exascale architectures. To ensure the optimal use of heterogeneous hardware components, new geometric multigrid solvers and highly scalable nonlinear domain-decomposition methods for CPUs and GPUs will be developed. In addition, various time-parallel and time-simultaneous approaches will increase the parallel potential of the algorithms.
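The following Python sketch, an illustrative toy example rather than anything from FEATFLOW, shows the structure of a geometric multigrid V-cycle for a one-dimensional Poisson problem: smoothing on the fine grid, restriction of the residual, a recursive coarse-grid correction, and interpolation back up.

```python
# Toy geometric multigrid V-cycle for -u'' = f with zero Dirichlet boundaries.
import numpy as np

def apply_A(u, h):
    """Matrix-free 1D Laplacian (-u'') on interior points, zero boundaries."""
    up = np.pad(u, 1)                                     # embed zero boundary values
    return (2.0 * up[1:-1] - up[:-2] - up[2:]) / h**2

def jacobi(u, f, h, sweeps=3, omega=2/3):
    """Weighted Jacobi smoothing: damps high-frequency error components."""
    for _ in range(sweeps):
        u = u + omega * (h**2 / 2.0) * (f - apply_A(u, h))
    return u

def v_cycle(u, f, h):
    if u.size == 1:                                       # coarsest grid: solve exactly
        return f * h**2 / 2.0
    u = jacobi(u, f, h)                                   # pre-smoothing
    r = f - apply_A(u, h)                                 # fine-grid residual
    rc = 0.25 * (r[:-2:2] + 2.0 * r[1:-1:2] + r[2::2])    # full-weighting restriction
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)          # coarse-grid correction
    e = np.zeros_like(u)                                  # linear interpolation back up
    e[1::2] = ec
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    e[0] = 0.5 * ec[0]
    e[-1] = 0.5 * ec[-1]
    return jacobi(u + e, f, h)                            # post-smoothing

n = 2**7 - 1                          # interior grid points, mesh width h = 1/(n+1)
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)      # exact solution of -u'' = f is sin(pi*x)
u = np.zeros(n)
for _ in range(10):                   # a few V-cycles reduce the error to grid accuracy
    u = v_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))
```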
Contact: Dr. Lars Hoffmann
from JSC News No. 293, 6 December 2022