Parallel-in-Time Integration Methods


The efficient use of modern high performance computing (HPC) systems has become one of the key challenges in computational science. Top HPC architectures already provide million-way concurrency, and current trends suggest that processor counts will continue to grow rapidly. Exploiting these levels of parallelism with traditional techniques for spatial parallelism becomes problematic when, for example, communication costs begin to dominate for a fixed problem size (“strong scaling barrier”) or when increased spatial resolution requires more time-steps due to stability constraints (“weak scaling barrier”).

For the numerical solution of time-dependent differential equations, parallel-in-time integration (PinT) methods have recently been shown to provide a promising way to extend these prevailing scaling limits. This research group at JSC focuses primarily on the design, analysis, implementation and optimization of PinT and space-time multilevel methods for extreme-scale HPC systems.

Research Topics

  • Parallelization across the steps using multilevel methods
  • Parallelization across the steps using diagonalization
  • Parallelization across the method
  • Fault-tolerant parallel-in-time integration


Dr. Robert Speck


Building 16.3 / Room 309

+49 2461/61-1644


Parallelization across the steps using multilevel methods

To obtain large-scale parallelization in time, various methods such as Parareal or the parallel full approximation scheme in space and time (PFASST) can be used to integrate multiple steps simultaneously. To overcome the inherent serial dependence in the time direction, these approaches typically introduce a space-time hierarchy, where integrators with different costs are coupled in an iterative fashion. Serial dependencies are shifted to the coarsest level, allowing the computationally expensive parts on finer levels to be treated in parallel. These methods show a strong relationship to linear or nonlinear multigrid methods and can be analyzed in a similar way.
Ref: M. L. Minion, R. Speck, M. Bolten, M. Emmett, and D. Ruprecht, Interweaving PFASST and Parallel Multigrid, SIAM Journal on Scientific Computing, 37(5), 244-263, 2015.
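As a minimal illustration of the idea behind Parareal, the sketch below applies it to the scalar Dahlquist test equation u' = λu, with implicit Euler as both the coarse and the fine propagator. All parameter choices here are ours for illustration only; a production multilevel method such as PFASST is considerably more involved.

```python
import numpy as np

def parareal(u0, lam, T, N, K, fine_substeps=100):
    """Minimal Parareal for the Dahlquist test equation u' = lam*u on [0, T].

    G: one implicit Euler step per time slab (cheap coarse propagator).
    F: fine_substeps implicit Euler steps per slab (expensive fine propagator;
       in a real implementation the F calls run in parallel across slabs).
    """
    dt = T / N

    def G(u):
        return u / (1.0 - lam * dt)

    def F(u):
        h = dt / fine_substeps
        for _ in range(fine_substeps):
            u = u / (1.0 - lam * h)
        return u

    u = np.empty(N + 1)
    u[0] = u0
    for n in range(N):                      # serial coarse prediction
        u[n + 1] = G(u[n])

    for _ in range(K):                      # Parareal correction iterations
        Fu = [F(u[n]) for n in range(N)]    # parallelizable across slabs
        Gu = [G(u[n]) for n in range(N)]
        for n in range(N):                  # serial coarse sweep + correction
            u[n + 1] = G(u[n]) + Fu[n] - Gu[n]
    return u

u = parareal(u0=1.0, lam=-1.0, T=1.0, N=10, K=5)
```

After a handful of iterations the Parareal values match the serial fine solution, while the expensive fine propagation can be distributed across processors.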

Parallelization across the steps using diagonalization

In order to avoid coarsening with all its pitfalls, diagonalization-based methods make use of block-circulant preconditioners to parallelize the integration of multiple time-steps. These preconditioners can be diagonalized efficiently using fast Fourier transforms (FFT) in time. While this approach works well even for hyperbolic problems, its direct application is restricted to linear problems. The key question addressed in this field of research is how to obtain efficient parallel integrators for nonlinear problems.
Ref: G. Caklovic, R. Speck, and M. Frank, A parallel implementation of a diagonalization-based parallel-in-time integrator, arXiv:2103.12571 [math.NA], submitted.
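A toy sketch of the mechanism, assuming the simplest possible setting (a scalar linear ODE discretized with implicit Euler; the value of the circulant parameter alpha and the iteration count are illustrative choices of ours): the all-at-once system over all time-steps is preconditioned with an alpha-circulant matrix, which is diagonalized by an FFT in time so that all diagonal solves could run in parallel.

```python
import numpy as np

def solve_alpha_circulant(c, b, alpha):
    """Solve C_alpha x = b by FFT diagonalization, where C_alpha is the
    alpha-circulant matrix with first column c (cost O(L log L))."""
    L = len(c)
    gamma = alpha ** (np.arange(L) / L)          # diagonal scaling D_alpha
    lam = np.fft.fft(gamma * c)                  # eigenvalues of C_alpha
    return np.fft.ifft(np.fft.fft(gamma * b) / lam) / gamma

# All-at-once system for u' = lam_ode*u with implicit Euler over L steps:
# (1 - dt*lam_ode)*u_n - u_{n-1} = b_n. First column of the Toeplitz matrix:
L, dt, lam_ode, alpha = 32, 0.05, -1.0, 1e-2
c = np.zeros(L); c[0] = 1.0 - dt * lam_ode; c[1] = -1.0
b = np.zeros(L); b[0] = 1.0                      # initial value u_0 = 1

# Preconditioned Richardson iteration x <- x + C_alpha^{-1}(b - A x); the
# Toeplitz matrix A and C_alpha differ only in one corner entry, so the
# iteration converges at a rate governed by alpha.
x = np.zeros(L, dtype=complex)
for _ in range(8):
    Ax = np.empty(L, dtype=complex)
    Ax[0] = c[0] * x[0]
    Ax[1:] = c[0] * x[1:] + c[1] * x[:-1]
    x = x + solve_alpha_circulant(c, b - Ax, alpha)
```

The FFT-based solve replaces a serial forward substitution over all time-steps; for nonlinear problems this direct diagonalization is no longer available, which is exactly the research question above.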

Parallelization across the method

If the application of high-order, multi-stage time integrators is possible or even required, another way to introduce parallelism in time is to use stage-parallel integrators. While the potential for parallelism is naturally limited here, the implementation is rather straightforward and the efficiency is usually favorable. Yet, finding good stage-parallel methods is the major challenge. The group primarily focuses on spectral deferred corrections with parallel preconditioners. Both artificial and natural intelligence can be helpful ingredients here.
Ref: R. Schöbel and R. Speck, PFASST-ER: combining the parallel full approximation scheme in space and time with parallelization across the method, Comput. Visual Sci., 23, 12, 2020.
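A small sketch of where the stage parallelism comes from, again on the Dahlquist equation u' = λu (the collocation nodes and the naive diagonal preconditioner below are placeholders of ours, not the group's actual preconditioner choices): when the spectral deferred correction sweep is preconditioned with a diagonal matrix, the stage equations decouple and can be solved simultaneously.

```python
import numpy as np

def collocation_matrix(nodes):
    """Q[i, j] = integral from 0 to nodes[i] of the j-th Lagrange basis
    polynomial for the given nodes (the spectral integration matrix)."""
    M = len(nodes)
    Q = np.zeros((M, M))
    for j in range(M):
        lj = np.poly1d([1.0])
        for m in range(M):
            if m != j:
                lj = lj * np.poly1d([1.0, -nodes[m]]) / (nodes[j] - nodes[m])
        Lj = lj.integ()
        Q[:, j] = [Lj(t) - Lj(0.0) for t in nodes]
    return Q

# SDC sweeps for u' = lam*u on one step [0, dt]: the collocation system is
# (I - dt*lam*Q) u = u0 * ones. Preconditioning with a *diagonal* matrix
# decouples the stage equations, so each sweep solves them independently.
lam, dt, u0 = -1.0, 0.5, 1.0
nodes = np.array([0.2, 0.5, 0.8])     # placeholder nodes on (0, 1)
Q = collocation_matrix(nodes)
qd = np.diag(Q)                        # naive diagonal preconditioner

u = np.full(3, u0)
for _ in range(25):
    rhs = u0 + dt * lam * (Q @ u - qd * u)
    u = rhs / (1.0 - dt * lam * qd)    # independent scalar stage solves
```

Each sweep performs one scalar solve per stage with no coupling between stages, so the stages map naturally onto parallel workers; finding diagonal preconditioners that also converge fast is the hard part.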

Fault-tolerant parallel-in-time integration

Many PinT methods share features that make them natural candidates for algorithmic-based fault tolerance (ABFT): they hold copies of the (approximate) solution at different times on different processors and they are iterative and/or hierarchical by nature. Since time stepping is typically the outermost loop for the numerical solution of a time-dependent partial differential equation, protecting it by ABFT covers a larger area of the code. This research is closely related to the application of compression techniques for reducing the memory and communication footprint as well as adaptivity in time.
Ref: R. Speck and D. Ruprecht, Toward fault-tolerant parallel-in-time integration with PFASST, Parallel Computing, 62, 20-37, 2017.
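To illustrate the recovery idea in a hypothetical minimal setting (all parameters, the faulty slab index and the fault iteration are our illustrative choices): in a Parareal-type iteration, the value held by one "processor" is lost mid-run and re-initialized from the coarse propagator applied to the neighboring copy; the iteration simply continues and still converges.

```python
import numpy as np

# Parareal-type iteration with a simulated processor failure: the value of
# one time slab is lost and recovered from a neighbor's copy via the coarse
# propagator, exploiting the iterative nature of the method (ABFT idea).
lam, T, N, K = -1.0, 1.0, 10, 8
dt = T / N

def G(u):                                   # coarse: one implicit Euler step
    return u / (1.0 - lam * dt)

def F(u, m=100):                            # fine: m implicit Euler substeps
    for _ in range(m):
        u = u / (1.0 - lam * dt / m)
    return u

u = np.empty(N + 1)
u[0] = 1.0
for n in range(N):                          # serial coarse prediction
    u[n + 1] = G(u[n])

for k in range(K):
    if k == 3:
        u[5] = G(u[4])                      # simulated fault + ABFT recovery
    Fu = [F(u[n]) for n in range(N)]        # parallel fine propagation
    Gu = [G(u[n]) for n in range(N)]
    for n in range(N):
        u[n + 1] = G(u[n]) + Fu[n] - Gu[n]  # Parareal correction
```

Because the iteration contracts toward the fine serial solution anyway, the perturbation introduced by the fault is damped out by the remaining sweeps instead of requiring a full restart.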

Last Modified: 01.08.2022