Institute for Advanced Simulation (IAS)

PMG+PFASST: a space-time parallel multilevel solver

Scientific area

Time-parallel methods, space-time multilevel algorithms, parallel multigrid.

Short description

The PMG+PFASST code provides a space-time parallel solver for systems of ODEs with stiff linear terms, stemming, e.g., from method-of-lines discretizations of PDEs.
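To illustrate the problem class (the library itself is written in Fortran 2003 and C), here is a minimal Python sketch of a method-of-lines discretization of the 1D heat equation, giving an ODE system u' = A u with a stiff linear term A, advanced with implicit Euler. The function names are illustrative, not part of PMG+PFASST:

```python
import numpy as np

def heat_matrix(n, dx):
    """Second-order central-difference Laplacian on n interior points
    (Dirichlet boundaries): the stiff linear term of the ODE system."""
    return (np.diag(-2.0 * np.ones(n)) +
            np.diag(np.ones(n - 1), 1) +
            np.diag(np.ones(n - 1), -1)) / dx**2

def implicit_euler(u, A, dt, nsteps):
    """Advance u' = A u with backward Euler: solve (I - dt A) u_new = u,
    which stays stable despite the stiffness of A."""
    I = np.eye(len(u))
    for _ in range(nsteps):
        u = np.linalg.solve(I - dt * A, u)
    return u

n, dx = 63, 1.0 / 64
A = heat_matrix(n, dx)
x = np.linspace(dx, 1 - dx, n)
u0 = np.sin(np.pi * x)            # eigenmode, decays roughly like exp(-pi^2 t)
u1 = implicit_euler(u0, A, 1e-3, 100)
```

Each implicit step requires a linear solve in space; in PMG+PFASST this is exactly the role taken over by the parallel multigrid solver.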

The "parallel full approximation scheme in space and time" (PFASST) combines Parareal-like iterations for time-parallel integration with multilevel spectral deferred correction (SDC) sweeps in a space-time multigrid fashion. With innovative coarsening strategies in space and time, parallel efficiency can be significantly increased compared to classical Parareal.
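The Parareal-like outer iteration that PFASST builds on can be sketched as follows. This is plain Parareal on a scalar toy problem, run serially; it omits the multilevel SDC sweeps and is not PFASST's actual implementation. In the parallel version, the fine solves per time slice run concurrently:

```python
import numpy as np

def parareal(y0, t0, t1, nslices, coarse, fine, niter):
    """Plain Parareal: predictor with a cheap coarse propagator, then
    iterative correction  y_{j+1} <- G(y_j_new) + F(y_j_old) - G(y_j_old).
    `coarse`/`fine` are propagators y_new = prop(y, ta, tb)."""
    ts = np.linspace(t0, t1, nslices + 1)
    y = [y0]
    for j in range(nslices):                    # serial coarse predictor
        y.append(coarse(y[j], ts[j], ts[j + 1]))
    for _ in range(niter):
        # Fine solves per slice: independent, i.e. parallel in time.
        f = [fine(y[j], ts[j], ts[j + 1]) for j in range(nslices)]
        y_new = [y0]                            # serial correction sweep
        for j in range(nslices):
            g_new = coarse(y_new[j], ts[j], ts[j + 1])
            g_old = coarse(y[j], ts[j], ts[j + 1])
            y_new.append(g_new + f[j] - g_old)
        y = y_new
    return y

# Toy problem y' = -y on [0, 1]; exact solution exp(-t).
lam = -1.0
def euler(y, ta, tb, nsteps):
    dt = (tb - ta) / nsteps
    for _ in range(nsteps):
        y = y + dt * lam * y
    return y

coarse = lambda y, ta, tb: euler(y, ta, tb, 1)     # 1 step per slice
fine   = lambda y, ta, tb: euler(y, ta, tb, 100)   # 100 steps per slice
sol = parareal(1.0, 0.0, 1.0, nslices=8, coarse=coarse, fine=fine, niter=4)
```

After a few iterations the Parareal solution converges to what the fine propagator alone would produce serially; PFASST improves on this scheme by replacing the propagators with SDC sweeps on a hierarchy of space-time levels.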

For the solution of the linear systems in space, which arise from the implicit/stiff parts of the ODE, the parallel multigrid method PMG is applied. The PFASST library primarily uses MPI for communication in time; an experimental on-node implementation with Pthreads is available as well. PMG, on the other hand, relies on MPI for the spatial distribution of degrees of freedom, but first tests with additional OpenMP pragmas show promising results. By coupling PFASST and PMG for the benchmark problem, the already impressive strong-scaling capabilities of the spatial solver alone can be increased by a factor of three on the full IBM Blue Gene/Q installation.
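The coupling rests on a space-time decomposition of the MPI ranks: blocks of spatial ranks (used by PMG) are stacked along the time axis (used by PFASST). A minimal sketch of the rank layout, with the MPI calls only indicated in comments since the actual communicator setup in PMG+PFASST may differ:

```python
def space_time_ranks(world_rank, nspace):
    """Map a world rank to (time_rank, space_rank): consecutive blocks
    of `nspace` ranks form one spatial group per time slice. With MPI
    this corresponds to two MPI_Comm_split calls on MPI_COMM_WORLD:
    color=time_rank yields the spatial communicator (PMG),
    color=space_rank yields the temporal one (PFASST).
    Illustrative layout, not PMG+PFASST's actual API."""
    return world_rank // nspace, world_rank % nspace

# Example: 8 world ranks, 4 spatial ranks per slice -> 2 time slices.
layout = [space_time_ranks(r, 4) for r in range(8)]
```

Each rank thus participates in exactly one spatial and one temporal communicator, so multigrid solves in space and PFASST communication in time can proceed without interfering with each other.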

Figure: 2D shear layer instability computed with PFASST+PMG at three different time steps (horizontal axis), with three-level coarsening in space and time (vertical axis) using a reduced number of temporal nodes, reduced spatial degrees of freedom, and a reduced discretization order.


Scalability

  • 458,752 cores on BlueGene/Q (JUQUEEN)
  • 262,144 cores on BlueGene/P (JUGENE)
  • 16,384 cores on Cray XE6

Figure: Space-time parallel scaling of PMG+PFASST on JUQUEEN. PMG alone already scales to the full machine with 511³ degrees of freedom, while applying PFASST as a parallel time stepper gives an additional speedup of 3 on 448K cores.

Programming language and model

  • Fortran 2003 and C
  • MPI and MPI+Pthreads

Tested on platforms

  • BlueGene/Q and /P
  • Cray XE6
  • x86

Application developers and contact


Matthew Emmett
MS 50A-1135, Lawrence Berkeley National Lab
Center for Computational Sciences and Engineering
1 Cyclotron Rd.
Berkeley, CA 94720


Matthias Bolten
University of Wuppertal
Faculty C - Mathematics and Natural Sciences
Department of Mathematics
D-42097 Wuppertal


Robert Speck
Forschungszentrum Jülich GmbH
Jülich Supercomputing Centre
D-52425 Jülich

Daniel Ruprecht, Rolf Krause
Università della Svizzera italiana
Institute of Computational Science
Via Giuseppe Buffi 13
CH-6900 Lugano








