JUST - Juelich Storage Cluster
The Juelich Storage Cluster (JUST) serves as the GPFS (General Parallel File System) fileserver for the supercomputing systems JUQUEEN, JUROPA, and JUDGE, and for dedicated cluster systems such as JUVIS and DEEP at the Juelich Supercomputing Centre (JSC).
JUST hosts a local GPFS cluster with several GPFS file systems, each dedicated to a different usage pattern.
- One part of the cluster is built by classical components like servers, storage controllers, and standard GPFS on top.
- The second part uses the new GPFS Storage Servers (GSS) with servers, storage JBODs, and GPFS Native RAID, which integrates the storage controller functionality into GPFS itself. It uses declustered RAID technology to deliver not only outstanding throughput but also extreme data integrity, faster rebuild times, and enhanced data protection. See also the IBM case study about GSS at JSC.
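The faster rebuild times of declustered RAID come from spreading the spare capacity over all disks, so a rebuild is absorbed by many disks in parallel instead of being funnelled onto one dedicated spare. The following toy model sketches this effect; all numbers are illustrative assumptions, not JUST's actual hardware parameters.

```python
# Toy model of declustered-RAID rebuild speed (as used by GPFS Native RAID).
# Assumption: rebuild time is limited by the aggregate bandwidth of the
# disks that absorb the rebuilt data. Numbers below are illustrative only.

def rebuild_hours(data_tb, per_disk_bw_mbs, disks_absorbing_rebuild):
    """Hours to re-create data_tb of lost data at the given aggregate bandwidth."""
    total_bw_mbs = per_disk_bw_mbs * disks_absorbing_rebuild
    return data_tb * 1_000_000 / total_bw_mbs / 3600

# Conventional RAID: a single spare disk absorbs the whole rebuild.
conventional = rebuild_hours(data_tb=4, per_disk_bw_mbs=100, disks_absorbing_rebuild=1)

# Declustered RAID: spare space is distributed, e.g. over 50 surviving disks.
declustered = rebuild_hours(data_tb=4, per_disk_bw_mbs=100, disks_absorbing_rebuild=50)

print(f"conventional: {conventional:.1f} h, declustered: {declustered:.1f} h")
# The declustered rebuild is ~50x faster in this model.
```

In practice the speedup is smaller, since GPFS Native RAID throttles rebuild traffic to protect application I/O, but the parallelism argument is the same.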
Access to the data from the HPC systems is in general realized by GPFS remote cluster mounts, in one case combined with CNFS (Clustered Network File System). Additionally, there is an NFS gateway for a special trusted project in cooperation with JSC.
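A GPFS remote cluster mount of this kind is set up on the client (compute) cluster with commands along the following lines. This is a hedged sketch only: the cluster name, contact nodes, key file, device names, and mount point are placeholders, not JUST's actual configuration.

```shell
# Register the remote storage cluster and its public key
# (cluster name, contact nodes, and key file are placeholders):
mmremotecluster add just.example.org -n justnode1,justnode2 -k just_id_rsa.pub

# Declare the remote file system under a local device name and mount point:
mmremotefs add work -f work -C just.example.org -T /work

# Mount it on all nodes of the client cluster:
mmmount work -a
```

These commands require that both clusters have exchanged authentication keys (`mmauth`) and that the storage cluster has granted access to the file system.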
The measured peak performance for applications on JUQUEEN using the fast scratch file system ($WORK) and the dedicated file system ($DATA) on the GSS infrastructure was 160 GB/sec.
| Date | Event |
|---|---|
| 17 Jun 2014 | Integration of the file systems of the classical GPFS cluster into the GSS GPFS cluster; conversion of all GPFS file systems to the same latest level (18.104.22.168) |
| May/Jun 2014 | Installation and setup of three additional GSS-24 systems with about 2.2 PB usable storage for the future migration of the $HOME file systems to the GSS infrastructure |
| 13 Feb 2014 | Overall upgrade to GPFS 3.5.x |
| 6/12 Feb 2014 | JUST3: old scratch file system deleted; final power-off of the JUST3 GPFS servers and storage |
| 16 Sep 2013 | Switched the standard scratch file system to the one on the GSS infrastructure; the old scratch file system remained accessible read-only for a limited time. See Migration to the new enhanced GPFS work file system |
| Aug 2013 | Start of production of the new scratch file system ($WORKNEW), which became the standard scratch file system on 16 September 2013. See Migration to the new enhanced GPFS work file system |
| Jun 2013 | Two new file systems provided on GSS: /worknew, a new scratch file system to replace /work in the near future, and /data, dedicated to large projects in collaboration with JSC and available on application for quota only |
| Jan 2013 | JUST4-GSS: installation and test of new GSS (GPFS Storage Server) systems with limited availability; additional capacity: 9.2 PB gross, 7.4 PB net |
| Dec 2012 | Partial upgrade to GPFS 3.5.x for a special mmbackup application |
| Sep 2012 | Upgrade to GPFS 3.4.x |
| May 2012 | $WORK capacity and performance doubled by adding all freed JUST3 storage controllers |
| Apr 2012 | $HOME and $ARCH file systems migrated from JUST3 to the new JUST4 storage |
| Mar 2012 | JUST4: expansion of JUST by additional xSeries servers and new IBM DS3512 and DCS3700 storage controllers; additional capacity: 4.4 PB gross, 3.4 PB net |
| Dec 2011 | End of production for the DEISA fileservers |
| Nov 2010 | Upgrade to GPFS 3.3.x |
| Dec 2009 | JUST3: expansion of JUST by replacing the storage controllers for the GPFS data part with IBM DS5300; capacity: 5.6 PB gross, 4.3 PB net |
| Mar 2009 | JUST2: expansion of JUST by replacing the Power5 systems with Power6 systems, incl. a new IBM DS5300 storage controller for the GPFS metadata part |
| Jan 2009 | Upgrade to GPFS 3.2.x |
| Jun 2008 | GPFS cluster for DEISA file systems added on dedicated DEISA fileservers |
| Jul 2007 | Start of production of JUST: IBM Power5 server systems with IBM DS4800, DS4700, and DCS9550 storage controllers; GPFS 3.1.x; capacity: 1.1 PB gross, 0.86 PB net |