Software on JURECA

Outline

Basic modules usage

Available compilers, MPI runtimes and basic math libraries

GPUs and modules

Finding software packages

Stages

Scientific software at JSC

Requesting new software

Basic modules usage

Loading a module sets environment variables to give you access to a specific set of software and its dependencies. We use a hierarchical organization of modules to ensure that you get a consistent software stack, e.g., all built with the same compiler version or all relying on the same implementation of MPI.

What this means on JURECA is that there are multiple compilers and MPI runtimes available. As a JURECA user, your first task is to load the desired compiler. The available compilers, as well as some other compiler-independent tools, can be listed with the module avail command:

user ~]$ module avail

----------------- Core packages -----------------
Advisor/2018
AllineaForge/7.1
AllineaPerformanceReports/7.1
Atom/1.21.1
Autotools/20150215
Blender/2.79-binary
CFITSIO/3.420
CMake/3.9.4
CUDA/9.0.176 (g)
CVS/1.11.23
Camino/20161122
Cube/4.3.5
Doxygen/1.8.13
[...]
------------------- Compilers -------------------
GCC/5.4.0
GCC/7.2.0 (D)
Intel/2017.5.239-GCC-5.4.0
Intel/2018.0.128-GCC-5.4.0 (D)
PGI/17.9-GCC-5.4.0

------------- Recommended defaults --------------
defaults/CPU

----------------- Architectures -----------------
Architecture/Haswell (S) Architecture/KNL (S,D)

Once you have chosen a compiler you can load it with module load <compiler>:

user ~]$ module load Intel

You can verify which modules you have loaded with module list:

user ~]$ module list

Currently Loaded Modules:
1) icc/.2018.0.128-GCC-5.4.0 (H)
2) ifort/.2018.0.128-GCC-5.4.0 (H)
3) Intel/2018.0.128-GCC-5.4.0
4) GCCcore/.5.4.0 (H)
5) binutils/.2.29 (H)
6) StdEnv (H)

Where:
H: Hidden Module

Note that the module environment loads the dependencies that are needed, even if they are hidden. Loading the Intel compiler will give you access to a set of software compatible with your selection, which again can be listed with module avail:


user ~]$ module avail

-- MPI runtimes available for Intel compilers ---
IntelMPI/2018.0.128 ParaStationMPI/5.2.0-1 (D)
ParaStationMPI/5.2.0-1-mt

---- Packages compiled with Intel compilers -----
Eigen/3.3.4 MPFR/3.1.6
Embree/2.17.0 f90depend/1.5
GEOS/3.6.2-Python-2.7.14 librsb/1.2.0-rc7
HDF/4.2.13 libxsmm/1.8.1
HDF5/1.8.19-serial ncview/2.1.7
Libxc/2.2.3 netCDF-Fortran/4.4.4-serial
METIS/5.1.0 netCDF/4.4.1.1-serial

----------------- Core packages -----------------
Advisor/2018
AllineaForge/7.1
AllineaPerformanceReports/7.1
Atom/1.21.1
Autotools/20150215
Blender/2.79-binary
CFITSIO/3.420
CMake/3.9.4
CUDA/9.0.176 (g)
CVS/1.11.23
Camino/20161122
Cube/4.3.5
Doxygen/1.8.13
[...]

------------------- Compilers -------------------
GCC/5.4.0
GCC/7.2.0 (D)
Intel/2017.5.239-GCC-5.4.0
Intel/2018.0.128-GCC-5.4.0 (L,D)
PGI/17.9-GCC-5.4.0

------------- Recommended defaults --------------
defaults/CPU

----------------- Architectures -----------------
Architecture/Haswell (S) Architecture/KNL (S,D)

Where:
S: Module is Sticky, requires --force to unload or purge
g: Built for GPU
L: Module is loaded
D: Default Module

Among these newly available modules, the most important ones are the MPI runtimes (which appear at the top of the available software). Loading an MPI runtime will again give you access to software built on top of that runtime. Please note that when you load a module for which multiple versions are available, the version marked with (D) is the one loaded by default.

user ~]$ module load ParaStationMPI
user ~]$ module avail

---- Packages compiled with ParaStationMPI and Intel compilers ----
ABINIT/8.4.4
ARPACK-NG/3.5.0
ASE/3.15.0-Python-2.7.14
Boost/1.65.1-Python-2.7.14
Boost/1.65.1-Python-3.6.3 (D)
CDO/1.9.1
CGAL/4.11-Python-2.7.14
CGAL/4.11-Python-3.6.3 (D)
CP2K/4.1-plumed-elpa
CPMD/4.1
DOLFIN/2017.1.0-Python-2.7.14
DOLFIN/2017.1.0-Python-3.6.3 (D)
ELPA/2016.05.004-pure-mpi
ELPA/2017.05.002-hybrid
ELPA/2017.05.002-pure-mpi (D)
[...]

-- MPI runtimes available for Intel compilers ---
IntelMPI/2018.0.128
ParaStationMPI/5.2.0-1-mt
ParaStationMPI/5.2.0-1 (L,D)

---- Packages compiled with Intel compilers -----
Eigen/3.3.4 MPFR/3.1.6
Embree/2.17.0 f90depend/1.5
GEOS/3.6.2-Python-2.7.14 librsb/1.2.0-rc7
HDF/4.2.13 libxsmm/1.8.1
HDF5/1.8.19-serial ncview/2.1.7
Libxc/2.2.3 netCDF-Fortran/4.4.4-serial
METIS/5.1.0 netCDF/4.4.1.1-serial

----------------- Core packages -----------------
Advisor/2018
AllineaForge/7.1
AllineaPerformanceReports/7.1
Atom/1.21.1
Autotools/20150215
Blender/2.79-binary
CFITSIO/3.420
CMake/3.9.4
CUDA/9.0.176 (g)
CVS/1.11.23
Camino/20161122
Cube/4.3.5
Doxygen/1.8.13
[...]

------------------- Compilers -------------------
GCC/5.4.0
GCC/7.2.0 (D)
Intel/2017.5.239-GCC-5.4.0
Intel/2018.0.128-GCC-5.4.0 (L,D)
PGI/17.9-GCC-5.4.0

------------- Recommended defaults --------------
defaults/CPU

----------------- Architectures -----------------
Architecture/Haswell (S) Architecture/KNL (S,D)

Where:
S: Module is Sticky, requires --force to unload or purge
g: Built for GPU
L: Module is loaded
D: Default Module
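If you need a version other than the default, give the full module name when loading; module load with just the package name always picks the version marked with (D). As a hedged illustration, using module names from the listing above:

user ~]$ module load Boost/1.65.1-Python-2.7.14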

Sometimes, as a user, you simply want to find out which modules you have to load before a particular software package or application becomes available. module spider can help you with that task: it searches the whole hierarchy and reports the specific module combinations that make the package loadable:

user ~]$ module spider gromacs

-------------------------------------------------------
GROMACS:
-------------------------------------------------------
Description:
GROMACS is a versatile package to perform
molecular dynamics, i.e. simulate the Newtonian
equations of motion for systems with hundreds to
millions of particles.

Versions:
GROMACS/2016.4-hybrid-plumed

-------------------------------------------------------
For detailed information about a specific "GROMACS" module (including how to load the modules) use the module's full name.
For example:

$ module spider GROMACS/2016.4-hybrid-plumed
-------------------------------------------------------

user ~]$ module spider GROMACS/2016.4-hybrid-plumed

-------------------------------------------------------
GROMACS: GROMACS/2016.4-hybrid-plumed
-------------------------------------------------------
Description:
GROMACS is a versatile package to perform
molecular dynamics, i.e. simulate the Newtonian
equations of motion for systems with hundreds to
millions of particles.

You will need to load all module(s) on any one of the lines below before the "GROMACS/2016.4-hybrid-plumed" module is available to load.

Intel/2018.0.128-GCC-5.4.0 IntelMPI/2018.0.128
Intel/2018.0.128-GCC-5.4.0 ParaStationMPI/5.2.0-1

Help:

Description
===========
GROMACS is a versatile package to perform molecular dynamics,
i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.

More information
================
- Homepage: http://www.gromacs.org
- Site contact: sc@fz-juelich.de
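Following one of the module combinations reported by module spider, a minimal sketch to make GROMACS loadable would be:

user ~]$ module load Intel/2018.0.128-GCC-5.4.0 ParaStationMPI/5.2.0-1
user ~]$ module load GROMACS/2016.4-hybrid-plumed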

Currently there are more than 700 packages installed per Stage (see Stages). To keep a clean and uncluttered view, a significant number of these packages (mostly helper libraries) are hidden. If you want to see them, you can do so with module --show-hidden avail:

user ~]$ module --show-hidden avail

---- Packages compiled with ParaStationMPI and Intel compilers ----
[...]
GDAL/.2.2.2 (H)
[...]
GTI/.1.5.0 (H)
[...]
PnMPI/.1.5.0 (H)
[...]

---- Packages compiled with Intel compilers -----
[...]
Libint/.1.1.4 (H)
[...]
UDUNITS/.2.2.25 (H)
[...]

----------------- Core packages -----------------
ANTLR/.2.7.7-Python-2.7.14 (H)
APR-util/.1.6.0 (H)
APR/.1.6.2 (H)
AT-SPI2-ATK/.2.26.0 (H)
AT-SPI2-core/.2.26.0 (H)
ATK/.2.26.0 (H)
[...]

Where:
S: Module is Sticky, requires --force to unload or purge
g: Built for GPU
L: Module is loaded
D: Default Module
H: Hidden Modules

Available compilers, MPI runtimes and basic math libraries

JURECA has three major compilers available: GCC, Intel and PGI. The table below shows the compiler, MPI, CUDA and basic mathematical library (BLAS, LAPACK, FFTW, ScaLAPACK) combinations that have been made available on JURECA. Please note the specific GCC and Intel compiler versions needed to enable GPU software.

Compiler            MPI                          CUDA      Math libraries
GCC 7.2.0           ParaStationMPI 5.2.0-1                 OLF 2017b¹
Intel 2018.0.128    ParaStationMPI 5.2.0-1                 MKL 2018.0.128
Intel 2018.0.128    ParaStationMPI 5.2.0-1-mt²             MKL 2018.0.128
Intel 2018.0.128    IntelMPI 2018.0.128                    MKL 2018.0.128
GCC 5.4.0           MVAPICH2 2.3a-GDR            9.0.176   OLF 2017b¹
Intel 2017.5.239    MVAPICH2 2.3a-GDR            9.0.176   MKL 2018.0.128
PGI 17.9            MVAPICH2 2.3a-GDR            9.0.176   MKL 2018.0.128

¹ OLF 2017b: OpenBLAS 0.2.20, LAPACK 3.7.1, ScaLAPACK 2.0.2, FFTW 3.3.6

² The ParaStationMPI module with the -mt suffix allows the MPI runtime to be called from multiple threads at the same time (MPI_THREAD_MULTIPLE)
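As a hedged example, one of the CPU toolchains from the table can be assembled with the module names shown earlier in this document:

user ~]$ module load Intel/2018.0.128-GCC-5.4.0
user ~]$ module load IntelMPI/2018.0.128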

GPUs and modules

JURECA has 75 nodes equipped with GPUs. In order to use these GPUs, you have to use a GPU-aware toolchain. Software compatibility imposes certain restrictions on backend compilers, so specific compiler versions are needed to enable GPU usage. These versions are shown in the table in the previous section. Software with specific GPU support is marked with a (g) when listing modules.

Currently the only CUDA-aware MPI runtime available on the system is MVAPICH2. It can be reached by loading one of the compilers listed in the table of the previous section for which CUDA has been installed. It is important to note that, as of today, this MPI runtime will only work on the GPU nodes. However, equivalent software (without GPU support) has been installed for toolchains (combinations of compiler and MPI runtime) that can be used outside of the GPU nodes.
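A hedged sketch of setting up a GPU-enabled environment follows; it assumes that, after loading one of the CUDA-enabled compilers from the table, MVAPICH2 is exposed under a module name matching the table entry, and it uses the CUDA module shown in the core packages listing:

user ~]$ module load Intel/2017.5.239-GCC-5.4.0
user ~]$ module load MVAPICH2/2.3a-GDR
user ~]$ module load CUDA/9.0.176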

Finding software packages

There are three commands that are the main tools to locate software on JURECA:

module avail

module spider <software>

module key <keyword or software>

Normally, the first two are enough. Occasionally, module key is needed to search for keywords or for packages bundled into a single module. An example is numpy, which is included in the SciPy-Stack module. In the example below, the module environment searches for all occurrences of numpy in the module descriptions, which helps to locate SciPy-Stack.

user ~]$ module key numpy

---------------------------------------------------
The following modules match your search criteria:
"numpy"
---------------------------------------------------

SciPy-Stack: SciPy-Stack/2017b-Python-2.7.14, ...
SciPy Stack is a collection of open source
software for scientific computing in Python.

netcdf4-python: ...
Python/numpy interface to netCDF.

[...]

Additionally, the complete list of software installed can be checked online in the modules browser.
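Once module key has pointed you to the bundling module, module spider will report which modules have to be loaded first, exactly as in the GROMACS example above; for instance:

user ~]$ module spider SciPy-Stack/2017b-Python-2.7.14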

Stages

JURECA goes through major scientific software updates every 6 months (May and November), at the same time that new projects start their allocation period. We call these updates Stages. During a stage switch, the available software is updated to the latest stable releases. Typically this requires that user applications be recompiled. In such cases, there are two possible solutions:

  1. Load the new versions of the required dependency modules and recompile.
  2. Load the old Stage.

To load the old Stage, users should use these commands:

user ~]$ module use /usr/local/software/jureca/OtherStages

user ~]$ module load Stages/2017a

The old software view then becomes available again, just as before the stage switch. In the example above the desired Stage was 2017a, but as new stage transitions happen more possibilities will become available.
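Once the old Stage has been loaded, modules are loaded from that view in the usual way. A hedged sketch (module versions within an old Stage may differ from the ones shown earlier in this document):

user ~]$ module load Intel ParaStationMPI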

Scientific software at JSC

JSC provides a significant amount of software installed on its systems. Scientific Application Software gives an overview of what is supported and how to use it.

Requesting new software

It is possible to request new software to be installed on JURECA. To do so, please send an email to sc@fz-juelich.de describing which software and version you need. Please note that this will be done on a "best effort" basis and the software might have limited support.

