Institute for Advanced Simulation (IAS)

Research on parallel-in-time integration methods

With the number of cores in high-end parallel computer systems expected to grow into the tens of millions by the end of the decade, classical parallelization strategies for fixed-size problems inevitably reach their intrinsic limits. Exploiting the massive concurrency of present and upcoming systems therefore requires additional directions of parallelism. For time-dependent PDEs, time-parallel methods have recently been demonstrated to be a promising approach.

Approaches to parallelization in time can be roughly sorted into three categories [1]: parallel across the method, parallel across the problem, and parallel across the steps. A recent contribution to the first category is the “revisionist integral deferred correction” (RIDC) method, based on concurrent computation of different stages of an integral deferred correction scheme [2]. Similar to parallel Runge-Kutta methods, this approach allows using a number of cores equal to the order of the method for parallelization in time. Waveform relaxation is probably the most prominent example of the second category [3]. It decomposes the full space-time system into a set of subsystems and uses a relaxation-based iteration to compute the solution.
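The idea behind waveform relaxation can be illustrated with a minimal sketch: a 2x2 linear ODE system is split into two scalar subsystems, each integrated over the whole time window while seeing the other component only through the previous iterate (Jacobi-style relaxation). The system, discretization, and parameter values below are illustrative choices, not taken from the references.

```python
import numpy as np

def jacobi_waveform_relaxation(T=1.0, n=200, iters=20):
    """Jacobi waveform relaxation for the coupled system
        u1' = -2*u1 + u2,   u2' = u1 - 2*u2,   u(0) = (1, 0).
    Each subsystem is integrated over the whole time window with forward
    Euler, using the other component only from the previous iterate, so
    the two sweeps could run concurrently."""
    dt = T / n
    u1 = np.ones(n + 1)    # initial guess: constant initial condition
    u2 = np.zeros(n + 1)
    for _ in range(iters):
        v1 = np.empty(n + 1)
        v2 = np.empty(n + 1)
        v1[0], v2[0] = 1.0, 0.0
        for i in range(n):
            v1[i + 1] = v1[i] + dt * (-2.0 * v1[i] + u2[i])  # old u2
            v2[i + 1] = v2[i] + dt * (u1[i] - 2.0 * v2[i])   # old u1
        u1, u2 = v1, v2
    return u1, u2

def euler_coupled(T=1.0, n=200):
    """Reference: forward Euler applied to the full coupled system."""
    dt = T / n
    A = np.array([[-2.0, 1.0], [1.0, -2.0]])
    u = np.array([1.0, 0.0])
    out = [u.copy()]
    for _ in range(n):
        u = u + dt * (A @ u)
        out.append(u.copy())
    return np.array(out)
```

The fixed point of this iteration is exactly the forward-Euler solution of the full coupled system, so after enough sweeps the relaxed subsystem solutions agree with the monolithic discretization.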

The field of time-parallel methods received a large boost from the introduction of the Parareal algorithm [4], which is to date the most widely used “parallel across the steps” approach. It has been shown to be applicable to a wide variety of applications, and various modifications exist to improve robustness or to accelerate convergence. In addition, new methods have been developed to overcome Parareal’s rather strict intrinsic bound on parallel efficiency. One promising example, the “parallel full approximation scheme in space and time” (PFASST, [5]), intertwines the iterations of Parareal with those of multilevel spectral deferred correction (SDC) methods in order to achieve significant improvements in speedup and efficiency, see figure below.
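The basic Parareal iteration can be sketched in a few lines: a cheap coarse propagator provides a serial prediction across all time slices, and the expensive fine propagations of each iteration are mutually independent, so in practice they run concurrently with one processor per slice. The test problem and propagator choices below are illustrative, not taken from [4].

```python
def parareal(coarse, fine, u0, n_slices, iters):
    """Plain Parareal iteration for an initial value problem.
    coarse(u) and fine(u) each propagate a state across one time slice;
    the fine propagations within one iteration are independent and would
    run in parallel, one processor per slice."""
    # serial coarse prediction
    U = [u0]
    for n in range(n_slices):
        U.append(coarse(U[n]))
    for _ in range(iters):
        F = [fine(U[n]) for n in range(n_slices)]    # parallelizable part
        G = [coarse(U[n]) for n in range(n_slices)]
        V = [u0]                                     # serial correction sweep
        for n in range(n_slices):
            V.append(coarse(V[n]) + F[n] - G[n])
        U = V
    return U

# Illustrative setup: Dahlquist problem u' = -u on [0, 1] with 10 slices.
dt_f, steps = 0.001, 100                    # fine: 100 Euler steps per slice
fine = lambda u: u * (1.0 - dt_f) ** steps
coarse = lambda u: u * (1.0 - 0.1)          # coarse: one Euler step per slice
U = parareal(coarse, fine, 1.0, n_slices=10, iters=10)
```

After k iterations the first k slices carry the serial fine solution exactly, which is the source of Parareal's efficiency bound: useful speedup requires convergence in far fewer iterations than there are time slices.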

Figure: Scaling of PEPC and PFASST. Speedup gained by using PFASST as parallel time-stepper for the space-parallel Barnes-Hut tree code PEPC on IBM Blue Gene/P. Space-parallel runs saturate at 8192 and 2048 cores, respectively, represented by the left-most data point. Using 32x more cores for temporal parallelization reduces time-to-solution by an additional factor of 6–7.

This group at JSC, part of the cross-sectional team Mathematical Methods and Algorithms, primarily focuses on

  • large-scale applications with Parareal and PFASST on HPC systems
  • mathematical analysis of multigrid SDC and PFASST
  • multiphysics-based coarsening for PFASST
  • space-time-parallel particle simulations (see figure below)
  • time-parallel methods for parabolic equations (as part of the project “EXASOLVERS - Extreme scale solvers for coupled problems” within the DFG German Priority Programme 1648 “Software for Exascale Computing”, SPPEXA)

Figure: Combination of PEPC and PFASST with PT x PS nodes. Spatial decomposition for each of the PT time slices is performed by PEPC using PS nodes within each PEPC communicator (depicted as one box in the figure). For PFASST, this structure is duplicated PT times to create independently running instances of PEPC. PFASST connects the i-th node of each box to one new MPI communicator, which results in PS separate PFASST communicators for the temporal decomposition.
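The communicator layout described in the caption amounts to two splits of the global rank set. A minimal sketch of the (color, key) arithmetic that would be passed to `MPI_Comm_split` for this layout — an illustration of the described scheme, not the actual PEPC/PFASST code:

```python
def split_colors(rank, ps):
    """Return (color, key) pairs for the two communicator splits described
    above: PS consecutive global ranks form one PEPC box (spatial
    communicator), and the i-th rank of every box joins one PFASST
    (temporal) communicator."""
    spatial = (rank // ps, rank % ps)    # one PEPC communicator per time slice
    temporal = (rank % ps, rank // ps)   # connects the i-th rank of each box
    return spatial, temporal
```

With PT = 3 time slices and PS = 4 ranks per slice, for example, global ranks 0-3 share spatial color 0 (one PEPC box), while ranks 0, 4 and 8 share temporal color 0 (one PFASST communicator), giving PT spatial and PS temporal communicators in total.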

Further Literature

  1. K. Burrage: Parallel Methods for Systems of Ordinary Differential Equations (1995)
  2. A. Christlieb, C. Macdonald, B. Ong: Parallel High-Order Integrators, SIAM J. Sci. Comput., Vol. 32, 818–835 (2010)
  3. M. Gander, L. Halpern, F. Nataf: Optimal Schwarz Waveform Relaxation for the One Dimensional Wave Equation, SIAM J. Numer. Anal., Vol. 41, 1643–1681 (2004)
  4. J. Lions, Y. Maday, G. Turinici: A “parareal” in time discretization of PDE's, Comptes Rendus de l'Académie des Sciences - Series I - Mathematics, Vol. 332, 661–668 (2001)
  5. M. Emmett, M. Minion: Toward an efficient parallel in time method for partial differential equations, Communications in Applied Mathematics and Computational Science, Vol. 7, 105–132 (2012)