JUROPA / HPC-FF System Configuration

[Figure: Juropa / HPC-FF architecture]

JUROPA / HPC-FF Components

  • JUROPA:
    FZJ/GCS production system
    Successor to JUMP (IBM Power4 compute cluster)
  • HPC-FF:
    High Performance Computing For Fusion
    Dedicated to the European fusion research community

Hardware Characteristics

JUROPA

  • 2208 compute nodes:

    • 2 Intel Xeon X5570 (Nehalem-EP) quad-core processors

      • 2.93 GHz
      • SMT (Simultaneous Multithreading)
    • 24 GB memory (DDR3, 1066 MHz)
    • InfiniBand QDR HCA (via QNEM Network Express Module)
  • 17664 cores total
  • 207 Teraflops peak performance
  • 183.5 Teraflops Linpack performance
  • Sun Blade 6048 system
  • InfiniBand QDR with non-blocking fat-tree topology
  • Sun Data Center Switch 648

HPC-FF

  • 1080 compute nodes:

    • 2 Intel Xeon X5570 (Nehalem-EP) quad-core processors

      • 2.93 GHz
      • SMT (Simultaneous Multithreading)
    • 24 GB memory (DDR3, 1066 MHz)
    • Mellanox ConnectX InfiniBand QDR HCA
  • 8640 cores total
  • 101 Teraflops peak performance
  • 87.3 Teraflops Linpack performance
  • Bull NovaScale R422-E2 technology
  • InfiniBand QDR with non-blocking fat-tree topology
  • Mellanox MTS3600 Switch

Complete System

  • 3288 compute nodes
  • 79 TB main memory
  • 26304 cores
  • 308 Teraflops peak performance
  • 274.8 Teraflops Linpack performance (TOP500 list June 2009)
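
As a quick plausibility check, the aggregate figures follow from the per-node specifications: each node carries 2 sockets x 4 cores, and a Nehalem-EP core is commonly credited with 4 double-precision floating-point operations per cycle (SSE: 2 adds + 2 multiplies). A minimal sketch; the per-cycle figure is an assumption of this sketch, not stated above:

    #include <stdio.h>

    int main(void)
    {
        const int    nodes          = 2208 + 1080; /* JUROPA + HPC-FF nodes  */
        const int    cores_per_node = 2 * 4;       /* 2 sockets x 4 cores    */
        const double clock_ghz      = 2.93;        /* Xeon X5570             */
        const double flop_per_cycle = 4.0;         /* assumed for Nehalem-EP */

        int    cores   = nodes * cores_per_node;
        double peak_tf = cores * clock_ghz * flop_per_cycle / 1000.0; /* GFLOPS -> TFLOPS */
        double mem_tb  = nodes * 24.0 / 1000.0;    /* 24 GB per node, decimal TB */

        printf("cores: %d\n", cores);              /* 26304            */
        printf("peak:  %.1f TFLOPS\n", peak_tf);   /* ~308.3           */
        printf("mem:   %.1f TB\n", mem_tb);        /* ~78.9 (~79 TB)   */
        return 0;
    }

The Linpack values are measured HPL results and therefore fall below the arithmetic peak.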

Lustre Storage Pool

  • 4 Metadata Servers (MDS)

    • 2 x Bull NovaScale R423-E2 (Nehalem-EP quad-core)
    • 2 x Bull NovaScale R423-E2 (Westmere-EP, 6-core)
    • 98 TB for metadata (2 x EMC CX4-240)
  • 14 Object Storage Servers (OSS) for home file systems

    • Sun Fire X4170 Server
    • 500 TB user data (28 x Sun Storage J4400 Array)
  • 8 Object Storage Servers (OSS) for home file systems

    • 8 x Bull NovaScale R423-E2 (Nehalem-EP quad-core)
    • 500 TB user data (2 x DDN SFA10000 storage)
  • 8 Object Storage Servers (OSS) for scratch file system

    • 8 x Bull NovaScale R423-E2 (Westmere-EP, 6-core)
    • 834 TB user data (2 x DDN SFA10000 storage)
  • Aggregated data rate ~50 GB/s (see the I/O sketch after this list)
  • Overall storage capacity: 1.8 PB
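
The ~50 GB/s is an aggregate across all object storage servers; a single client stream sees far less, while a parallel job issuing collective I/O can approach it. A minimal MPI-IO sketch of that pattern, assuming a hypothetical scratch path (actual mount points are site-specific):

    #include <mpi.h>
    #include <string.h>

    #define BLOCK (8 * 1024 * 1024)          /* 8 MiB written per rank */

    int main(int argc, char **argv)
    {
        int rank;
        static char buf[BLOCK];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        memset(buf, rank & 0xff, BLOCK);

        /* Each rank writes one contiguous block at a disjoint offset; the
           collective call lets the MPI-IO layer aggregate requests so Lustre
           can stripe the traffic across its OSSs. Path is hypothetical. */
        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "/lustre/scratch/demo.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        MPI_File_write_at_all(fh, (MPI_Offset)rank * BLOCK, buf, BLOCK,
                              MPI_BYTE, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        MPI_Finalize();
        return 0;
    }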

Infrastructure Nodes

Login Nodes

  • 12 x Bull NovaScale R423-E2 (8 cores, 24 GB DDR3)

GPFS Gateway Nodes

  • 4 x Bull NovaScale R423-E2 (8 cores, 24 GB DDR3)
  • 2 x Bull NovaScale R423-E2 (12 cores, 192 GB DDR3)

Management Nodes

  • 2 Master Nodes with Master Repository
  • 35 Admin Nodes for scaled system services

Software

  • SUSE Linux Enterprise Server (SLES) 11 Operating System
  • ParaStation Cluster Management

    • GridMonitor
  • TORQUE/Moab Batch and Resource Management System
  • Intel Professional Fortran and C/C++ Compilers

    • Intel Cluster Tools
  • Intel Math Kernel Library
  • ParTec ParaStation MPI (Message Passing Interface)
  • OpenMP Intra-Node Programming Model (see the hybrid example after this list)
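
ParaStation MPI implements the standard MPI interface, so the usual hybrid pattern applies on this hardware: one MPI rank per node or per socket, with OpenMP threads spread over the node's 8 cores (16 SMT threads). A minimal hybrid sketch; the compiler wrapper name and OpenMP flag depend on the installed toolchain:

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nprocs;

        /* FUNNELED: only the main thread makes MPI calls, which is all
           this hybrid hello-world needs. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    #pragma omp parallel
        printf("rank %d of %d, thread %d of %d\n",
               rank, nprocs, omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }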

Position in TOP500 - History

  • June 2009: 10
  • Nov. 2009: 13
  • June 2010: 14
  • Nov. 2010: 23
