

Scientific area

Direct numerical simulation of fine-scale turbulence.

Short description

The hybrid OpenMP/MPI code psOpen has been developed at the Institute for Combustion Technology, RWTH Aachen University, to study incompressible fluid turbulence by means of direct numerical simulations. Direct numerical simulation (DNS) solves the Navier-Stokes equations for all scales down to the smallest length scale present in turbulent flows and thereby provides a complete description of the flow: the three-dimensional (3D) flow fields are known as functions of space and time. Owing to growing computational capabilities, DNS of turbulent flows has become an indispensable tool.
For efficiency and accuracy, psOpen employs a pseudo-spectral method, in which the governing equations are solved in spectral space after a Fourier transform. There, derivatives in the transport equations turn into multiplications by the wavenumber, so all linear terms can be treated this way. The non-linear term, however, turns into a convolution in spectral space. Evaluating this convolution directly is computationally very expensive, requiring O(N^6) operations for N^3 grid points. Therefore, instead of evaluating the convolution, the multiplication in the non-linear term is computed in real space. This approach requires only O(N^3 log N) operations and is called a pseudo-spectral method, since only differentiation is performed in Fourier space. A pseudo-spectral method requires frequent transformations between real and spectral space, which is particularly challenging for massively-parallel setups.
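The core idea above — differentiate in spectral space, multiply in real space — can be sketched in one dimension. This is a minimal illustrative example in NumPy, not psOpen's actual (3D, Fortran/MPI) implementation; all names and parameters here are hypothetical:

```python
import numpy as np

# Periodic 1D grid (illustrative stand-in for psOpen's 3D domain).
N = 256                                         # grid points
L = 2.0 * np.pi                                 # domain length
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)    # angular wavenumbers

u = np.sin(x)                                   # sample velocity field
u_hat = np.fft.fft(u)                           # transform to spectral space

# Linear term: differentiation becomes multiplication by i*k in spectral space.
dudx = np.real(np.fft.ifft(1j * k * u_hat))

# Non-linear term u * du/dx: computed as a pointwise product in real space
# (O(N log N) per FFT) instead of an expensive convolution in spectral space.
nonlin_hat = np.fft.fft(u * dudx)

# Sanity check: the spectral derivative of sin(x) is cos(x).
print(np.allclose(dudx, np.cos(x)))
```

Each evaluation of the non-linear term thus costs a few FFTs plus a pointwise product, which is exactly why the frequent real-to-spectral transforms dominate the communication cost in massively-parallel runs.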

Figure: DNS of a passive scalar advected by a turbulent velocity field with 4096^3 grid points and Re = 530 (based on the Taylor microscale). Slice of the scalar field (left) and of the scalar dissipation (right).


Demonstrated scalability
  • 458,752 cores (1,835,008 compute threads) on BlueGene/Q (JUQUEEN)

Figure: Strong scaling of psOpen on JUQUEEN for four grid sizes between 2048^3 and 8192^3 grid points. Linear scaling is shown for reference. psOpen exhibits an almost linear speedup on up to 16,384 compute nodes.

Programming language and model

  • Fortran/C
  • MPI/OpenMP
  • HDF5 for I/O

Tested on platforms

  • BlueGene/Q
  • x86 (LRZ SuperMUC, RWTH Compute Cluster)

Application developers

Dipl.-Ing. Jens Henrik Göbbert
Jülich Aachen Research Alliance - JARA-HPC
former affiliation:
Institute for Combustion Technology, RWTH Aachen University

Dr.-Ing. Michael Gauding
Chair of Numerical Thermofluid Dynamics, TU Freiberg
former affiliation:
Institute for Combustion Technology, RWTH Aachen University


Dipl.-Ing. Jens Henrik Göbbert
Dr.-Ing. Michael Gauding
Prof. Dr.-Ing. mult. Norbert Peters

(Text and images provided by the developers, taken from the Technical Report FZJ-JSC-IB-2015-01)