Mathematical Libraries
NAG is a software library of numerical analysis routines developed by "The Numerical Algorithms Group". Areas covered by the library include linear algebra, optimization, quadrature, the solution of ordinary and partial differential equations, regression analysis, and time series analysis.
Installation and use:
The NAG software is available on JURECA Cluster.
It can be loaded through:
module load Stages/2022 Intel/2021.4.0 ParaStationMPI/5.5.0-1[-mt]
module load NAG/Mark28
Further information is provided by the command:
nag_example -help
Detailed documentation can be found at:
LAPACK
developed by Argonne National Laboratory, supported by the National Science Foundation (NSF) and the United States Department of Energy (DOE)
Library of subroutines for solving dense linear algebra problems efficiently on high-performance computers. Performance issues are addressed by implementing a large number of algorithms in terms of the level 2 and 3 BLAS and by incorporating recent algorithmic improvements for linear algebra computation. Because the BLAS routines have been optimized for single- and multiple-processor environments, these algorithms give nearly optimal performance.
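A minimal sketch of solving a dense linear system with LAPACK from C, assuming the LAPACKE C interface is available in the loaded LAPACK/MKL installation (with the Fortran interface the corresponding routine is dgesv):

#include <stdio.h>
#include <lapacke.h>

int main(void)
{
    /* Solve the 2x2 system A*x = b via LU factorization (dgesv) */
    double a[4] = { 4.0, 1.0,    /* row-major storage of A */
                    2.0, 3.0 };
    double b[2] = { 1.0, 2.0 };
    lapack_int ipiv[2];

    lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 2, 1, a, 2, ipiv, b, 1);
    if (info != 0) {
        fprintf(stderr, "dgesv failed, info = %d\n", (int)info);
        return 1;
    }
    printf("x = (%f, %f)\n", b[0], b[1]);
    return 0;
}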
GSL
The GNU Scientific Library (GSL) is a numerical library for C and C++ programmers. It is free software under the GNU General Public License.
The library provides a wide range of mathematical routines such as random number generators, special functions and least-squares fitting. There are over 1000 functions in total with an extensive test suite.
GSL can be used with the GCC and Intel compilers.
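A minimal sketch of using GSL from C (a special function and a random number generator); compile for example with gcc example.c -lgsl -lgslcblas -lm after loading the GSL module:

#include <stdio.h>
#include <gsl/gsl_sf_bessel.h>
#include <gsl/gsl_rng.h>

int main(void)
{
    /* Special function: Bessel function J0 at x = 5 */
    double x = 5.0;
    printf("J0(%g) = %.18e\n", x, gsl_sf_bessel_J0(x));

    /* Uniform random numbers from the Mersenne Twister generator */
    gsl_rng *r = gsl_rng_alloc(gsl_rng_mt19937);
    gsl_rng_set(r, 12345);                       /* seed */
    for (int i = 0; i < 3; i++)
        printf("u[%d] = %f\n", i, gsl_rng_uniform(r));
    gsl_rng_free(r);
    return 0;
}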
FFTW
developed by Matteo Frigo and Steven G. Johnson at MIT
FFTW is an efficient, multi-threaded C subroutine library with a Fortran interface for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data (as well as of even/odd data, i.e. the discrete cosine/sine transforms, DCT/DST). An MPI version is also available.
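A minimal sketch of a one-dimensional complex-to-complex transform with the FFTW3 C interface (link with -lfftw3 -lm; the threaded and MPI variants require additional initialization calls):

#include <stdio.h>
#include <fftw3.h>

int main(void)
{
    const int n = 8;
    fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * n);
    fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * n);

    /* Create the plan first, then fill the input (some planner flags may overwrite the arrays) */
    fftw_plan p = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
    for (int i = 0; i < n; i++) {
        in[i][0] = (double)i;   /* real part */
        in[i][1] = 0.0;         /* imaginary part */
    }
    fftw_execute(p);

    for (int i = 0; i < n; i++)
        printf("out[%d] = %f + %f i\n", i, out[i][0], out[i][1]);

    fftw_destroy_plan(p);
    fftw_free(in);
    fftw_free(out);
    return 0;
}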
GMP
The GNU Multiple Precision Arithmetic Library (GMP) is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers. There is no practical limit to the precision except the ones implied by the available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a regular interface. It is free software under the GNU General Public License.
The main target applications for GMP are cryptography applications and research, Internet security applications, algebra systems, computational algebra research, etc.
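A minimal sketch of arbitrary-precision integer arithmetic with GMP from C (link with -lgmp):

#include <gmp.h>

int main(void)
{
    mpz_t a, b, c;
    mpz_init_set_str(a, "123456789012345678901234567890", 10);
    mpz_init_set_ui(b, 2);
    mpz_init(c);

    /* c = a^2 * b, computed exactly regardless of the number of digits */
    mpz_mul(c, a, a);
    mpz_mul(c, c, b);

    gmp_printf("c = %Zd\n", c);

    mpz_clears(a, b, c, NULL);
    return 0;
}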
ScaLAPACK
Scalable Linear Algebra PACKage, or Scalable LAPACK,
Contributors: University of Tennessee at Knoxville, Oak Ridge National Laboratory, University of California at Berkeley
The ScaLAPACK library includes a subset of LAPACK routines redesigned for distributed memory MIMD parallel computers. It is currently written in a Single-Program-Multiple-Data style using explicit message passing for interprocessor communication. It assumes matrices are laid out in a two-dimensional block cyclic decomposition.
ScaLAPACK is based on the BLACS (Basic Linear Algebra Communication Subroutines). This means that a suitable BLACS library has to be linked, too. Access and usage are explained here in combination with ScaLAPACK.
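As an orientation, a minimal sketch of setting up a 2 x 2 BLACS process grid from C (run with 4 MPI tasks). The Cblacs_* routines are part of the BLACS/ScaLAPACK installation but are usually not declared in a header, so the prototypes below are assumptions that may need to be adapted to the local installation:

#include <stdio.h>
#include <mpi.h>

/* C interface to BLACS; declare the prototypes by hand */
extern void Cblacs_pinfo(int *mypnum, int *nprocs);
extern void Cblacs_get(int icontxt, int what, int *val);
extern void Cblacs_gridinit(int *icontxt, const char *order, int nprow, int npcol);
extern void Cblacs_gridinfo(int icontxt, int *nprow, int *npcol, int *myrow, int *mycol);
extern void Cblacs_gridexit(int icontxt);

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int iam, nprocs, ctxt;
    int nprow = 2, npcol = 2, myrow, mycol;

    Cblacs_pinfo(&iam, &nprocs);
    Cblacs_get(0, 0, &ctxt);                  /* default system context */
    Cblacs_gridinit(&ctxt, "Row", nprow, npcol);
    Cblacs_gridinfo(ctxt, &nprow, &npcol, &myrow, &mycol);

    printf("process %d of %d sits at grid position (%d,%d)\n", iam, nprocs, myrow, mycol);

    /* ... create array descriptors with descinit_ and call ScaLAPACK/PBLAS routines here ... */

    Cblacs_gridexit(ctxt);
    MPI_Finalize();
    return 0;
}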
ARPACK, PARPACK
ARnoldi PACKage, developed at Rice University
ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems.
PARPACK is a parallel version of ARPACK for distributed memory parallel architectures.
PETSc
Portable, Extensible Toolkit for Scientific computation, Argonne National Laboratory (ANL)
PETSc is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations. It employs the MPI standard for all message-passing communication.
Access and Usage:
PETSc version 3.16.3 has been installed on JURECA (on the current default stage: Stages/2022).
It is not available on JURECA Booster.
PETSc/3.16.3 is configured with standard double precision and integer values.
Other configurations can also be installed on request.
Older versions, including other configurations, are still available on older stages.
Preparations necessary to use PETSc on JURECA
First, load the PETSc version you want. For example, the latest PETSc version built with the GCC compiler and ParaStationMPI can be loaded with:
module load GCC/11.2.0
module load ParaStationMPI/5.5.0-1
module load PETSc/3.16.3
These commands set the variables PETSC_DIR and PETSC_ARCH.
If you do not want to use the PETSc makefiles, make sure that your own makefile contains the statements
include $(PETSC_DIR)/$(PETSC_ARCH)/lib/petsc/conf/petscvariables
include $(PETSC_DIR)/$(PETSC_ARCH)/lib/petsc/conf/petscrules
Examples
Examples are available under $PETSC_DIR/share/petsc/examples/, for instance src/vec/vec/tutorials/ex1.c.
To run, for instance, ex1 from $PETSC_DIR/share/petsc/examples/src/vec/vec/tutorials, do the following:
# load the PETSc you want (see “Preparations” above)
# copy ex1.c and the makefile to the current directory
cp $PETSC_DIR/share/petsc/examples/src/vec/vec/tutorials/ex1.c .
cp $PETSC_DIR/share/petsc/examples/src/vec/vec/tutorials/makefile .
# compile and link the example code
make ex1
To execute the example on 2 processors, write a batch file runex1.exe with the following contents:
#!/bin/bash -x
#SBATCH --nodes=1
#SBATCH --ntasks=2
#SBATCH --partition=dc-cpu
#SBATCH --account=<your project>
srun -n 2 ./ex1
To submit the batch file to the Slurm batch system, run:
sbatch runex1.exe
In the sources of the examples you will find a section //TEST that shows how the examples are meant to be executed.
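For orientation, a minimal sketch of a PETSc main program (vector creation and a norm, similar in spirit to ex1.c), using only documented PETSc calls and the error-checking idiom of the 3.16 release:

#include <petscvec.h>

int main(int argc, char **argv)
{
    Vec            x;
    PetscReal      norm;
    PetscInt       n = 20;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

    /* Create a parallel vector of global size n, fill it and print its 2-norm */
    ierr = VecCreate(PETSC_COMM_WORLD, &x);CHKERRQ(ierr);
    ierr = VecSetSizes(x, PETSC_DECIDE, n);CHKERRQ(ierr);
    ierr = VecSetFromOptions(x);CHKERRQ(ierr);
    ierr = VecSet(x, 1.0);CHKERRQ(ierr);
    ierr = VecNorm(x, NORM_2, &norm);CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_WORLD, "norm of x: %g\n", (double)norm);CHKERRQ(ierr);

    ierr = VecDestroy(&x);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
}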
Version:
optimised and debug versions
real and complex versions
JURECA: 3.16.3 (3.14 with downloads, complex version and version with 8-byte integers, debug versions only in the Devel Stage)
JURECA Booster: not available
JUWELS: 3.16.3 (3.14 with downloads, complex version and version with 8-byte integers, debug versions only in the Devel Stage)
JUSUF: 3.16.3 (3.14 with downloads, complex version and version with 8-byte integers, no debug version)
Documentation: PETSc on ANL WWW server
SLEPc, the Scalable Library for Eigenvalue Problem Computations
SLEPc is a software library for the solution of large scale sparse eigenvalue problems on parallel computers. It is an extension of PETSc and can be used for linear eigenvalue problems in either standard or generalized form, with real or complex arithmetic. It can also be used for computing a partial SVD of a large, sparse, rectangular matrix, and to solve nonlinear eigenvalue problems (polynomial or general). Additionally, SLEPc provides solvers for the computation of the action of a matrix function on a vector.
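As an illustration, a hedged sketch of the typical SLEPc workflow in C: an EPS eigensolver applied to a simple diagonal test matrix. Only documented EPS and PETSc calls are used; the solver can be steered at run time with the usual -eps_* options:

#include <slepceps.h>

int main(int argc, char **argv)
{
    Mat            A;
    EPS            eps;
    PetscInt       n = 100, i, Istart, Iend, nconv;
    PetscScalar    kr, ki;
    PetscErrorCode ierr;

    ierr = SlepcInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

    /* Assemble a diagonal test matrix with entries 1..n */
    ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
    ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
    ierr = MatSetFromOptions(A);CHKERRQ(ierr);
    ierr = MatSetUp(A);CHKERRQ(ierr);
    ierr = MatGetOwnershipRange(A, &Istart, &Iend);CHKERRQ(ierr);
    for (i = Istart; i < Iend; i++) {
        ierr = MatSetValue(A, i, i, (PetscScalar)(i + 1), INSERT_VALUES);CHKERRQ(ierr);
    }
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

    /* Standard Hermitian eigenvalue problem A x = lambda x */
    ierr = EPSCreate(PETSC_COMM_WORLD, &eps);CHKERRQ(ierr);
    ierr = EPSSetOperators(eps, A, NULL);CHKERRQ(ierr);
    ierr = EPSSetProblemType(eps, EPS_HEP);CHKERRQ(ierr);
    ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);
    ierr = EPSSolve(eps);CHKERRQ(ierr);

    ierr = EPSGetConverged(eps, &nconv);CHKERRQ(ierr);
    for (i = 0; i < nconv; i++) {
        ierr = EPSGetEigenpair(eps, i, &kr, &ki, NULL, NULL);CHKERRQ(ierr);
        ierr = PetscPrintf(PETSC_COMM_WORLD, "lambda_%d = %g\n", (int)i, (double)PetscRealPart(kr));CHKERRQ(ierr);
    }

    ierr = EPSDestroy(&eps);CHKERRQ(ierr);
    ierr = MatDestroy(&A);CHKERRQ(ierr);
    ierr = SlepcFinalize();
    return ierr;
}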
MUMPS
developed by P. R. Amestoy (ENSEEIHT-IRIT), J.-Y. L'Excellent (INRIA Rhône-Alpes), I. S. Duff (CERFACS/RAL), J. Koster (RAL/PARALLAB), and M. Tuma (ICS Czech Rep.)
MUMPS (MUltifrontal Massively Parallel Solver) is a package for solving linear systems of equations Ax=b, where the matrix A is sparse and can be either unsymmetric, symmetric positive definite, or general symmetric. MUMPS uses a multifrontal technique which is a direct method based on either the LU or the LDL^T factorization of the matrix. The software requires MPI for message passing and makes use of BLAS, BLACS, and ScaLAPACK subroutines.
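For illustration, a hedged sketch modelled on the c_example.c distributed with MUMPS, solving a tiny 2 x 2 system through the double-precision C interface; struct field names such as nz follow the MUMPS C interface but may differ slightly between MUMPS versions:

#include <stdio.h>
#include <mpi.h>
#include "dmumps_c.h"

#define JOB_INIT       -1
#define JOB_END        -2
#define USE_COMM_WORLD -987654

int main(int argc, char **argv)
{
    DMUMPS_STRUC_C id;
    /* 2x2 diagonal test system in coordinate format, 1-based indices */
    MUMPS_INT n = 2, nz = 2;
    MUMPS_INT irn[] = {1, 2}, jcn[] = {1, 2};
    double    a[]   = {1.0, 2.0};
    double    rhs[] = {1.0, 4.0};
    int myid;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    /* Initialize a MUMPS instance: unsymmetric matrix, host takes part in the computation */
    id.comm_fortran = USE_COMM_WORLD;
    id.par = 1;
    id.sym = 0;
    id.job = JOB_INIT;
    dmumps_c(&id);

    /* Matrix and right-hand side are provided on the host process only */
    if (myid == 0) {
        id.n = n;  id.nz = nz;
        id.irn = irn;  id.jcn = jcn;  id.a = a;
        id.rhs = rhs;
    }

    id.job = 6;          /* analysis + factorization + solve in one call */
    dmumps_c(&id);

    id.job = JOB_END;    /* release MUMPS internal data */
    dmumps_c(&id);

    if (myid == 0)
        printf("solution: %f %f\n", rhs[0], rhs[1]);

    MPI_Finalize();
    return 0;
}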
SPRNG
Scalable Parallel Random Number Generators Library (SPRNG) for ASCI Monte Carlo Computations
A project of Computer Science at Florida State University and Accelerated Strategic Computing Initiative (ASCI)
SPRNG 1.0 provides the various SPRNG random number generators, each in its own library. For most users this is acceptable, as one rarely uses more than one type of generator in a single program. If several generator types are needed in the same program, SPRNG 5.0 provides this added flexibility. In all other respects, SPRNG 1.0 and SPRNG 5.0 are identical.
ParMETIS
Parallel Graph Partitioning and Fill-reducing Matrix Ordering
developed in Karypis Lab at the University of Minnesota
ParMETIS is an MPI-based parallel library that implements a variety of algorithms for partitioning unstructured graphs, meshes, and for computing fill-reducing orderings of sparse matrices. ParMETIS extends the functionality provided by METIS and includes routines that are especially suited for parallel AMR computations and large scale numerical simulations. The algorithms implemented in ParMETIS are based on the parallel multilevel k-way graph-partitioning, adaptive repartitioning, and parallel multi-constrained partitioning schemes developed in Karypis Lab.
hypre
high performance preconditioners
is part of the Scalable Linear Solvers project at Lawrence Livermore National Laboratory
The goal of the Scalable Linear Solvers project is to develop scalable algorithms and software for solving large, sparse linear systems of equations on parallel computers. The primary software product is hypre, a library of high performance preconditioners that features parallel multigrid methods for both structured and unstructured grid problems. The problems of interest arise in the simulation codes being developed at LLNL and elsewhere to study physical phenomena in the defense, environmental, energy, and biological sciences.
sundials, SUite of Nonlinear and DIfferential/ALgebraic equation Solvers,
developed in the CASC (Center for Applied Scientific Computing) at Lawrence Livermore National Laboratory
consists of the following solvers:
KINSOL, which solves nonlinear algebraic systems.
ELPA
Eigenvalue Solvers for Petaflop-Applications,
Contributors: Rechenzentrum Garching, Fritz-Haber Institut der Max-Planck-Gesellschaft, Max-Planck-Institut für Mathematik in den Naturwissenschaften, Technische Universität München, Bergische Universität Wuppertal, IBM
Elemental
Elemental is an open-source library for distributed-memory dense linear algebra which attempts to strike a careful balance between ease of use and high performance. Its lead developer is Jack Poulson, Assistant Professor of Computational Science and Engineering at the Georgia Institute of Technology.
MAGMA Matrix Algebra on GPU and Multicore Architectures