Institute for Advanced Simulation (IAS)

PMG and PFASST

a space-time parallel multilevel solver

Scientific area

Time-parallel methods, space-time multilevel algorithms, parallel multigrid.

Short description

The PMG+PFASST code provides a space-time parallel solver for systems of ODEs with linear stiff terms, arising, e.g., from method-of-lines discretizations of PDEs.
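As a concrete illustration (a minimal sketch only, not part of the PMG+PFASST sources), a method-of-lines discretization of the 1D heat equation u_t = nu * u_xx replaces the spatial derivative by finite differences and leaves a stiff linear ODE system du/dt = A u, where A is the discrete Laplacian:

    /* Minimal method-of-lines illustration (not part of PMG+PFASST). */
    #include <stdio.h>

    /* Apply the stiff linear term A u, where A is the standard second-order
       finite-difference Laplacian with homogeneous Dirichlet boundaries.
       Its eigenvalues scale like -nu/dx^2, so the term becomes stiffer as
       the spatial grid is refined, which is why it is treated implicitly. */
    static void apply_stiff_term(int n, double dx, double nu,
                                 const double *u, double *dudt)
    {
        for (int i = 0; i < n; ++i) {
            double left  = (i > 0)     ? u[i - 1] : 0.0;
            double right = (i < n - 1) ? u[i + 1] : 0.0;
            dudt[i] = nu * (left - 2.0 * u[i] + right) / (dx * dx);
        }
    }

    int main(void)
    {
        enum { N = 8 };
        double dx = 1.0 / (N + 1), u[N], dudt[N];
        for (int i = 0; i < N; ++i) {
            double x = (i + 1) * dx;
            u[i] = x * (1.0 - x);                  /* u(x) = x(1-x), so u'' = -2 */
        }
        apply_stiff_term(N, dx, 1.0, u, dudt);
        for (int i = 0; i < N; ++i)
            printf("dudt[%d] = %g\n", i, dudt[i]); /* -2*nu at every point */
        return 0;
    }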

The "parallel full approximation scheme in space and time" (PFASST) joins Parareal-like iterations for time-parallel integration with multilevel spectral deferred correction sweeps in a space-time multigrid fashion. With innovative coarsening strategies in space and time, parallel efficiency can be significantly increased compared to classical Parareal.

For the solution of the linear systems in space, which arise from the implicit/stiff parts of the ODE, the parallel multigrid method PMG is applied. The PFASST library primarily uses MPI for communication in time; an experimental on-node implementation based on Pthreads is available as well. PMG, on the other hand, relies on MPI for the spatial distribution of degrees of freedom, but first tests with additional OpenMP pragmas show promising results. By coupling PFASST and PMG for the benchmark problem, the already impressive strong-scaling capability of the spatial solver alone is extended by a further factor of three on the full IBM Blue Gene/Q installation.
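A common way to organize such a space-time parallel run is to split MPI_COMM_WORLD into a spatial communicator for the multigrid solver and a temporal communicator for PFASST. The following self-contained C/MPI sketch illustrates this layout; the group sizes and variable names are assumptions for illustration and are not taken from the actual PMG+PFASST sources.

    /* Sketch: carve MPI_COMM_WORLD into space and time communicators.
       Assumption for illustration: 8 parallel time steps. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int world_rank, world_size;
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
        MPI_Comm_size(MPI_COMM_WORLD, &world_size);

        const int nranks_time = 8;                      /* parallel time steps */
        if (world_size % nranks_time != 0) {
            if (world_rank == 0)
                fprintf(stderr, "run with a multiple of %d ranks\n", nranks_time);
            MPI_Finalize();
            return 1;
        }
        const int nranks_space = world_size / nranks_time; /* ranks per time step */

        /* Ranks handling the same time step share a spatial communicator
           (domain decomposition for the multigrid solver); ranks handling the
           same spatial subdomain share a temporal communicator (PFASST). */
        int space_color = world_rank / nranks_space;
        int time_color  = world_rank % nranks_space;

        MPI_Comm comm_space, comm_time;
        MPI_Comm_split(MPI_COMM_WORLD, space_color, world_rank, &comm_space);
        MPI_Comm_split(MPI_COMM_WORLD, time_color,  world_rank, &comm_time);

        int space_rank, time_rank;
        MPI_Comm_rank(comm_space, &space_rank);
        MPI_Comm_rank(comm_time,  &time_rank);
        printf("world %d -> space rank %d, time rank %d\n",
               world_rank, space_rank, time_rank);

        MPI_Comm_free(&comm_space);
        MPI_Comm_free(&comm_time);
        MPI_Finalize();
        return 0;
    }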

Figure: 2D shear layer instability computed with PFASST+PMG at three different time steps (horizontal axis), shown for three-level coarsening in space and time (vertical axis) using a reduction of temporal nodes, a reduction of degrees of freedom in space, and a reduced discretization order.

Scalability

  • 458,752 cores on BlueGene/Q (JUQUEEN)
  • 262,144 cores on BlueGene/P (JUGENE)
  • 16,384 cores on Cray XE6

Figure: Space-time parallel scaling of PMG+PFASST on JUQUEEN. PMG alone already scales to the full machine using 511^3 degrees of freedom, while applying PFASST as a parallel time stepper gives an additional speedup of 3 on 448K cores.

Programming language and model

  • Fortran 2003 and C
  • MPI and MPI+Pthreads

Tested on platforms

  • BlueGene/Q and /P
  • Cray XE6
  • x86

Application developers and contact

PFASST:

Matthew Emmett
MS 50A-1135, Lawrence Berkeley National Lab
Center for Computational Sciences and Engineering
1 Cyclotron Rd.
Berkeley, CA 94720

mwemmett@lbl.gov

PMG:

Matthias Bolten
University of Wuppertal
Faculty C - Mathematics and Natural Sciences
Department of Mathematics
D-42097 Wuppertal

bolten@math.uni-wuppertal.de


Coupling:

Robert Speck
Forschungszentrum Jülich GmbH
Jülich Supercomputing Centre
D-52425 Jülich

r.speck@fz-juelich.de



Daniel Ruprecht, Rolf Krause
Università della Svizzera italiana
Institute of Computational Science
Via Giuseppe Buffi 13
CH-6900 Lugano

daniel.ruprecht@usi.ch
rolf.krause@usi.ch

