
Institute for Advanced Simulation (IAS)


JUST - Juelich Storage Cluster

JUST Cluster (Phase 3). Copyright: FZ Jülich

The Juelich Storage Cluster (JUST) at the Juelich Supercomputing Centre (JSC) serves as the central GPFS (General Parallel File System) file server for the supercomputing systems (e.g. JUQUEEN, JURECA), some dedicated cluster systems (e.g. DEEP, JUVIS, JUROPA3 partitions), and a data transfer system (JUDAC).

Currently, JUST is built from GPFS Storage Servers (GSS) consisting of servers, storage JBODs, and GPFS Native RAID, which integrates the storage-controller function into GPFS itself. GPFS Native RAID uses declustered RAID technology to deliver not only outstanding throughput, but also extreme data integrity, faster rebuild times, and enhanced data protection. See also the IBM case study about GSS at JSC as well as the Lenovo case study about GSS at JSC.
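The faster rebuild times come from spreading a failed disk's reconstruction reads over every surviving disk in a wide declustered array, rather than over the few disks of one conventional RAID group. A rough back-of-the-envelope model illustrates the effect (a sketch with illustrative numbers only; the disk size, read speed, and array widths below are assumptions, not GPFS Native RAID internals):

```python
# Toy model of rebuild time (illustrative, not GPFS Native RAID code):
# rebuild speed is limited by how much data each surviving disk must read.

def rebuild_hours(disk_tb, read_mb_s, chunks_per_stripe, sharing_disks):
    """Hours to rebuild one failed disk when each affected stripe needs
    `chunks_per_stripe` chunks re-read, spread over `sharing_disks` disks."""
    total_read_tb = chunks_per_stripe * disk_tb      # data re-read in total
    per_disk_mb = total_read_tb * 1e6 / sharing_disks
    return per_disk_mb / read_mb_s / 3600

# Conventional 8+2 RAID-6: only the 9 surviving disks of that group help.
conventional = rebuild_hours(disk_tb=4, read_mb_s=100,
                             chunks_per_stripe=8, sharing_disks=9)
# Declustered 8+2 stripes over a hypothetical 58-disk array: all 57
# survivors share the rebuild reads.
declustered = rebuild_hours(disk_tb=4, read_mb_s=100,
                            chunks_per_stripe=8, sharing_disks=57)
print(f"conventional: {conventional:.1f} h, declustered: {declustered:.1f} h")
```

In this simplified model the speedup is just the ratio of disks sharing the load (57/9, about 6x), which is why rebuilds on wide declustered arrays finish faster and disturb foreground I/O less.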

JUST hosts a local GPFS cluster with several GPFS file systems, each dedicated to a particular kind of usage.

Access to the data from the HPC systems is generally realized via remote GPFS mounts, in one case combined with CNFS (Clustered Network File System). Additionally, there is an NFS gateway for a special trusted project in cooperation with JSC.

The measured peak performance for applications on JUQUEEN using the fast scratch file system ($WORK) and the dedicated file system ($DATA) on the GSS infrastructure was 160 GB/s.


Aug-Sep 2015

Installed two additional GSS-26 systems and migrated the $ARCH file systems; the freed GSS-24 systems were added to $WORK and $DATA, increasing $WORK to 5.3 PB and $DATA to 2.7 PB.

Additional capacity: 4.1 PB gross, 3.3 PB net
Overall capacity: 20.3 PB gross, 16.2 PB net; Bandwidth: 220 GB/s

Jun 2015: I/O reconfiguration of 62 servers by splitting 2 x 30 GE channels into 3 x 20 GE channels to support the new JURECA HPC system.
Jun 2015: Upgrade of 20 GSS-24 systems from GSS 1.0 LA (GPFS 3.5.x) to GSS 2.0 (GPFS 4.1.x).
Apr 2015

Increased the capacity of $WORK to 4.4 PB and of $DATA to 2.2 PB.

Overall capacity: 16.2 PB gross, 12.9 PB net; Bandwidth: 200 GB/s

Feb-Mar 2015: Step-by-step transition of classical GPFS storage (servers, storage controllers, and disks) to 8 GSS-24 systems running GSS 2.0; migration of the /arch and /arch2 file systems to GSS; deletion of the /arch1 file system.
Feb 2015: Increased the default group quotas for $HOME and $WORK to accommodate the migration of Lustre data to GPFS.
($HOME: 6 TB -> 10 TB, 2 M files -> 3 M files; $WORK: 20 TB -> 30 TB.
Lustre homes: 2755 users, 656 groups, 400 TB, 13.5 M files; Lustre work: 600 TB, 20 M files.)
Feb 2015: Activated "relatime" for $WORK, which checks and sets the access time at most once a day.
Aug 2014: Migration of $HOME and administrative file systems from classical GPFS storage to GSS-24 systems, thereby doubling the capacity of each $HOME.
Jun 2014: Integration of the file systems of the classical GPFS cluster into the GSS GPFS cluster and conversion of all GPFS file systems to the same latest level.
May/Jun 2014

Installation and setup, with GSS 1.5, of three additional GSS-24 systems for the migration of the $HOME file systems to the GSS infrastructure.

Additional capacity: 2.8 PB gross, 2.2 PB net
Overall capacity: 16.6 PB gross, 13.0 PB net; Bandwidth: 160 GB/s

Feb 2014: Overall upgrade to GPFS 3.5.x.
Feb 2014

JUST3: Old scratch file system deleted; final power-off of the JUST3 GPFS servers and storage.

Overall capacity: 13.6 PB gross, 10.8 PB net; Bandwidth: 160 GB/s

Sep 2013: Switched the standard scratch file system to the one on the GSS infrastructure and offered access to the old scratch file system in read-only mode for a limited time. See Migration to the new enhanced GPFS work file system.
Aug 2013: Started production of the new scratch file system ($WORKNEW), which became the standard scratch file system on 16 September 2013. See Migration to the new enhanced GPFS work file system.
Jun 2013: Provided two new file systems on GSS: /worknew, a new scratch file system intended to replace /work in the near future, and /data, a file system dedicated to large projects in collaboration with JSC, with quota granted on application only.
Jan 2013

JUST4-GSS: Installation and test of new, limited-availability GSS (GPFS Storage Server) systems.

Additional capacity: 9.2 PB gross, 7.4 PB net
Overall capacity: 19.2 PB gross, 15.1 PB net; Bandwidth: 160 GB/s

Dec 2012: Partial upgrade to GPFS 3.5.x for a special mmbackup application.
Sep 2012: Upgrade to GPFS 3.4.x.
May 2012

$WORK capacity and performance doubled by adding all freed JUST3 storage controllers.

Overall capacity: 10 PB gross, 7.7 PB net; Bandwidth: 66 GB/s

Apr 2012: $HOME and $ARCH file systems migrated from JUST3 to the new JUST4 storage.
Mar 2012

JUST4: Expansion of JUST by additional x-Series servers running Linux and new IBM DS3512 and DCS3700 storage controllers.

Additional capacity: 4.4 PB gross, 3.4 PB net
Overall capacity: 10 PB gross, 7.7 PB net; Bandwidth: 33 GB/s

Dec 2011: End of production for the DEISA file servers.
Nov 2010: Upgrade to GPFS 3.3.x.
Dec 2009

JUST3: Expansion of JUST by replacing the storage controllers for the GPFS data part with IBM DS5300 controllers.

Overall capacity: 5.6 PB gross, 4.3 PB net; Bandwidth: 33 GB/s

Mar 2009: JUST2: Expansion of JUST by replacing the Power5 systems with Power6 systems, incl. a new IBM DS5300 storage controller for the GPFS metadata part.
Jan 2009: Upgrade to GPFS 3.2.x.
Jun 2008: GPFS cluster for DEISA file systems added on dedicated DEISA file servers.
Jul 2007

Start of production of JUST: IBM Power5 server systems running AIX with IBM DS4800, DS4700, and DCS9550 storage controllers and GPFS 3.1.x.

Overall capacity: 1.1 PB gross, 0.86 PB net; Bandwidth: 6-7 GB/s
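The "relatime" behaviour activated for $WORK in Feb 2015 (check and set the access time at most once a day) can be sketched as the following update rule. This is a hypothetical illustration of the policy, not GPFS source code:

```python
# Sketch of a relatime-style policy (illustrative, not GPFS internals):
# on a read, rewrite the stored atime only if it is at least a day old,
# so metadata writes happen at most once per day per file.
ONE_DAY = 24 * 3600

def atime_after_read(stored_atime, now):
    """Return the access time to store after a read at time `now`."""
    if now - stored_atime >= ONE_DAY:
        return now           # atime is at least a day old: update it once
    return stored_atime      # updated recently: skip the metadata write

t0 = 1_000_000.0
assert atime_after_read(t0, t0 + 3600) == t0                    # same day: no write
assert atime_after_read(t0, t0 + 2 * ONE_DAY) == t0 + 2 * ONE_DAY
```

The benefit on a busy scratch file system is that the vast majority of read accesses no longer trigger metadata writes, while tools that expire old data by access time still see an atime that is at most one day stale.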