JUST - Jülich Storage Cluster
The Jülich Storage Cluster (JUST) at the Jülich Supercomputing Centre (JSC) serves as the central storage provider for the supercomputing systems (JUWELS and JURECA), several dedicated cluster systems (JUAMS, JUZEA-1, etc.), and the data transfer system (JUDAC). JUST uses the IBM Spectrum Scale (formerly GPFS) file system technology.
JUST combines multiple storage technologies. The HPC-focused file systems reside on so-called building blocks, in which two servers are connected to four or six JBODs of hard disks managed by the GPFS Native RAID extension of Spectrum Scale. Its declustered RAID technology delivers not only outstanding throughput but also extreme data integrity, faster rebuild times, and enhanced data protection. See also the IBM case study about GSS at JSC as well as the Lenovo case study about GSS at JSC.
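To make the declustered RAID idea concrete, here is a toy Python sketch (a deliberately simplified model, not GNR's actual placement algorithm; the disk count and the 8+3P stripe width are illustrative assumptions). Because each stripe is scattered over a random subset of many disks, the reads needed to rebuild a failed disk are spread across all survivors instead of hitting one small RAID group:

```python
import random

DISKS = 58          # disks in one declustered array (illustrative)
STRIPE_WIDTH = 11   # e.g. 8 data + 3 parity strips (8+3P)
STRIPES = 100_000   # stripes placed in the array

random.seed(42)

# Declustered layout: every stripe lands on a random subset of disks.
placement = [random.sample(range(DISKS), STRIPE_WIDTH) for _ in range(STRIPES)]

failed = 0                       # assume disk 0 fails
load = [0] * DISKS               # rebuild reads per surviving disk
for stripe in placement:
    if failed in stripe:
        for d in stripe:
            if d != failed:
                load[d] += 1

affected = sum(1 for s in placement if failed in s)
print(f"stripes affected by the failure: {affected}")
print(f"max rebuild reads on any surviving disk: {max(load)}")
# In a conventional 11-disk RAID group, each of the 10 survivors would
# serve reads for *all* affected stripes; here the work is shared by
# all 57 survivors, so each disk does far less and the rebuild is faster.
```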
JUST hosts a local GPFS cluster with several GPFS file systems, each dedicated to a particular kind of usage.
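Under the usage model introduced in Q4 2018 (see the news table below), these file systems are typically reached through shell variables of the same names. A minimal sketch, assuming the variables are set in the user's environment on the login and compute nodes:

```python
import os

# Variable names follow the usage model ($HOME, $PROJECT, $FASTDATA,
# $DATA, $ARCHIVE); which ones are set depends on the account's projects.
for var in ("HOME", "PROJECT", "FASTDATA", "DATA", "ARCHIVE"):
    path = os.environ.get(var)
    print(f"${var:<8} -> {path if path else '(not set for this account)'}")
```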
Access to the data from the HPC systems is in general realized via remote GPFS cluster mounts. Only for projects with special requirements do we export dedicated directories via NFS, using the GPFS Cluster Export Services (CES) functionality.
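For inspecting this setup, the sketch below wraps two standard IBM Spectrum Scale administration commands: `mmremotecluster show all` (run on a compute cluster, it lists the remote storage clusters it mounts) and `mmces node list` (run on the storage cluster, it lists the CES protocol nodes). Both normally require administrative privileges; treating them as callable from Python here is an assumption for illustration:

```python
import subprocess

def run(cmd):
    """Run a Spectrum Scale admin command and print its output."""
    print(f"$ {' '.join(cmd)}")
    try:
        print(subprocess.run(cmd, capture_output=True, text=True).stdout)
    except FileNotFoundError:
        print("(Spectrum Scale CLI not available on this node)\n")

# On a compute cluster: show the remote clusters whose file systems
# are mounted via GPFS remote cluster mounts.
run(["mmremotecluster", "show", "all"])

# On the storage cluster: list the CES nodes providing the NFS exports.
run(["mmces", "node", "list"])
```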
The JUST storage system offers a nominal peak I/O bandwidth of 380 GB/s.
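For a rough sense of scale, a back-of-the-envelope calculation (the assumption that a single workload could sustain the full aggregate peak is ours; real jobs share the system and achieve less):

```python
# Upper bound: time to write 1 PB at the nominal aggregate peak.
pb_bytes = 1e15            # 1 PB
peak_bytes_s = 380e9       # 380 GB/s nominal peak I/O bandwidth
seconds = pb_bytes / peak_bytes_s
print(f"{seconds:.0f} s (~{seconds / 60:.0f} min)")  # 2632 s (~44 min)
```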
News
Date | News |
---|---|
Q1 2019 | JUST-DATA 2nd phase installed. Total capacity: 52 PB |
Q4 2018 | Transition to the new Jülich usage model: new file systems ($HOME, $PROJECT, $FASTDATA, $DATA, $ARCHIVE); data projects and compute projects |
Q2 2018 | New storage layer JUST-DATA installed (phase 1). Purpose: large data sets & data sharing. Capacity: 40 PB gross; Bandwidth: 20 GB/s |
Q1 2018 | 5th generation of JUST installed. Data migrated to new storage HW. Overall capacity: 75 PB gross; Bandwidth: 380 GB/s |
Q3 2015 | Additional 2 GSS-26 systems installed, data reorganized ($WORK extended). Additional capacity: 4.1 PB gross, 3.3 PB net |
Q2 2015 | All building blocks now based on GPFS Native RAID (GSS). Increasing capacity of $WORK to 4.4 PB and $DATA to 2.2 PB. Overall capacity: 16.2 PB gross, 12.9 PB net; Bandwidth: 200 GB/s |
Q1 2015 | JUST is now the central data file system for all HPC systems. Migration of JUROPA Lustre user data to GPFS. Lustre homes: 2755 users, 656 groups, 400 TB, 13.5 M files; Lustre work: 600 TB, 20 M files |
Q3 2014 | Additional three GSS-24 systems installed for $HOME to prepare Lustre data migration. Additional capacity: 2.8 PB gross, 2.2 PB net |
Q3 2013 | New scratch file system $WORK on the GSS infrastructure in production (see Migration to the new enhanced GPFS work file system). Overall capacity: 13.6 PB gross, 10.8 PB net; Bandwidth: 160 GB/s |
Q2 2013 | New /data file system introduced, dedicated to large projects in collaboration with JSC; quota granted on application only. |
Q1 2013 | JUST4-GSS: Installation and testing of the new, limited-availability GSS (GPFS Storage Server) systems. Additional capacity: 9.2 PB gross, 7.4 PB net |
Q2 2012 | $HOME and $ARCH file systems migrated from JUST3 to new JUST4 storage. $WORK capacity and performance doubled by adding all freed JUST3 storage controllers. Overall capacity: 10 PB gross, 7.7 PB net; Bandwidth: 66 GB/s |
Q1 2012 | JUST4: Expansion of JUST by additional x-Series servers running Linux and new IBM DS3512 and DCS3700 storage controllers. Additional capacity: 4.4 PB gross, 3.4 PB net |
Q4 2011 | End of production for DEISA fileservers. |
Q4 2009 | JUST3: Expansion of JUST by replacing the storage controllers for the GPFS data part with IBM DS5300. Overall capacity: 5.6 PB gross, 4.3 PB net; Bandwidth: 33 GB/s |
Q1 2009 | JUST2: Expansion of JUST by replacing the Power5 systems with Power6 systems, incl. a new IBM DS5300 storage controller for the GPFS metadata part. |
Q2 2008 | GPFS cluster for DEISA file systems added on dedicated DEISA fileservers. |
Q2 2007 | Start of JUST production: IBM Power5 server systems running AIX with IBM DS4800, DS4700, and DCS9550 storage controllers and GPFS 3.1.x. Overall capacity: 1.1 PB gross, 0.86 PB net; Bandwidth: 6-7 GB/s |