Institute for Advanced Simulation (IAS)


JURECA

Note: In autumn 2017 the JURECA system will be extended with a scalable booster component. More details about the architecture will be published in due time.

Hardware Characteristics of the Cluster Module

  • 1872 compute nodes

    • Two Intel Xeon E5-2680 v3 Haswell CPUs per node

      • 2 x 12 cores, 2.5 GHz
      • Intel Hyperthreading Technology (Simultaneous Multithreading)
      • AVX 2.0 ISA extension
    • 75 compute nodes equipped with two NVIDIA K80 GPUs (four visible devices per node)

      • 2 x 4992 CUDA cores
      • 2 x 24 GiB GDDR5 memory
    • DDR4 memory technology (2133 MHz)

      • 1605 compute nodes with 128 GiB memory
      • 128 compute nodes with 256 GiB memory
      • 64 compute nodes with 512 GiB memory
  • 12 visualization nodes

    • Two Intel Xeon E5-2680 v3 Haswell CPUs per node
    • Two NVIDIA K40 GPUs per node

      • 2 x 12 GiB GDDR5 memory
    • 10 nodes with 512 GiB memory
    • 2 nodes with 1024 GiB memory
  • Login nodes with 256 GiB memory per node
  • 45,216 CPU cores
  • 1.8 (CPU) + 0.44 (GPU) Petaflop per second peak performance
  • Based on the T-Platforms V-class server architecture
  • Mellanox EDR InfiniBand high-speed network with non-blocking fat tree topology
  • 100 GiB per second storage connection to JUST
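The headline core count and CPU peak performance follow directly from the node specifications above. A minimal sketch, assuming the usual 16 double-precision FLOPs per cycle per Haswell core (two 256-bit AVX2 FMA units) and counting the visualization nodes in the core total:

```python
# Rough core-count and peak-performance check for the JURECA cluster module.
# Assumption: 16 DP FLOPs/cycle/core (two 256-bit FMA units on Haswell).
compute_nodes = 1872
viz_nodes = 12
cores_per_node = 2 * 12           # two 12-core Xeon E5-2680 v3 CPUs
clock_hz = 2.5e9                  # 2.5 GHz
flops_per_cycle = 16

total_cores = (compute_nodes + viz_nodes) * cores_per_node
cpu_peak_pflops = compute_nodes * cores_per_node * clock_hz * flops_per_cycle / 1e15

print(total_cores)                # 45216, matching the figure above
print(round(cpu_peak_pflops, 2))  # 1.8, i.e. the quoted 1.8 PFLOP/s CPU peak
```

The GPU contribution of 0.44 PFLOP/s comes on top of this from the 75 K80-equipped nodes.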

Hardware Characteristics of the Booster Module

  • 1640 compute nodes with one Intel Xeon Phi 7250-F Knights Landing CPU per node

    • 68 cores, 1.4 GHz
    • Intel Hyperthreading Technology (Simultaneous Multithreading)
    • AVX-512 ISA extension
    • 96 GiB memory plus 16 GiB MCDRAM high-bandwidth memory
  • Shared login infrastructure with the cluster module
  • 111,520 CPU cores
  • 5 Petaflop per second peak performance
  • Intel Omni-Path Architecture high-speed network with non-blocking fat tree topology
  • 100+ GiB per second storage connection to JUST
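The booster figures can be verified the same way. A minimal sketch, assuming 32 double-precision FLOPs per cycle per Knights Landing core (two 512-bit AVX-512 FMA vector units):

```python
# Rough core-count and peak-performance check for the JURECA booster module.
# Assumption: 32 DP FLOPs/cycle/core (two AVX-512 FMA units on Knights Landing).
nodes = 1640
cores_per_node = 68
clock_hz = 1.4e9                  # 1.4 GHz
flops_per_cycle = 32

total_cores = nodes * cores_per_node
peak_pflops = total_cores * clock_hz * flops_per_cycle / 1e15

print(total_cores)                # 111520, matching the figure above
print(round(peak_pflops, 1))      # 5.0, i.e. the quoted 5 PFLOP/s peak
```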

Software Characteristics

  • CentOS 7 Linux distribution
  • ParaStation Cluster Management
  • Slurm batch system with ParaStation resource management
  • Intel Professional Fortran and C/C++ compilers

    • Support for OpenMP programming model for intra-node parallelization
  • Intel Math Kernel Library
  • ParTec MPI (Message Passing Interface) Implementation
  • Intel MPI (Message Passing Interface) Implementation
  • IBM General Parallel Filesystem (GPFS) 4.1
