GPFS File Systems in the Jülich Environment

All user file systems on the HPC systems (e.g. JUQUEEN, JURECA, DEEP), the JUROPA3 partitions (e.g. JUAMS, ZEA, JSC), and the Data Access System (JUDAC) are provided via multi-cluster GPFS from the HPC fileserver JUST.

The storage locations assigned to each user are encapsulated in shell environment variables (see table). The user's directory in each file system is shared across all systems to which the user has been granted access. It is recommended to organize data in system-architecture-specific subdirectories.
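A minimal sketch of such a layout, assuming the environment variables from the table below; the subdirectory names are hypothetical:

  # create one subdirectory per system architecture in the scratch file system
  mkdir -p "$WORK/juqueen" "$WORK/jureca"
  # keep architecture-specific binaries apart in the same way
  mkdir -p "$HOME/bin/juqueen" "$HOME/bin/jureca"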

The following file systems are available. Access is listed separately for JURECA, JUDAC, JUROPA3, and DEEP (login or compute nodes) and for JUQUEEN (front-end nodes (FENs) or I/O nodes (IONs)).

$HOME
  Full path to the user's home directory inside GPFS;
  for source code, binaries, libraries, and applications.
  TSM backup: yes
  JURECA, JUDAC, JUROPA3, DEEP access: Login + Compute
  JUQUEEN access: FENs + IONs

$WORK
  Full path to the user's standard scratch directory inside GPFS;
  temporary storage location for applications with large size and I/O demands.
  Data are deleted automatically (files 90 days after the last modification or access, empty directories after 3 days).
  TSM backup: no
  JURECA, JUDAC, JUROPA3, DEEP access: Login + Compute
  JUQUEEN access: FENs + IONs

$DATA
  Full path to the user's project data directory inside GPFS (available only to a limited number of users);
  storage for large projects in collaboration with JSC. The needed space must be applied for explicitly.
  TSM backup: yes
  JURECA, JUDAC, JUROPA3, DEEP access: Login + Compute
  JUQUEEN access: FENs + IONs

$ARCH
  Full path to the user's archive directory inside GPFS;
  storage for all files not in use for a longer time. Data are migrated to tape storage by TSM-HSM.
  TSM backup: yes
  JURECA, JUDAC, JUROPA3, DEEP access: Login only
  JUQUEEN access: FENs only

/usr/local
  Usable space: 53 TB.
  Software repository (usage via $PATH, $LD_LIBRARY_PATH, ...).
  TSM backup: yes
  JURECA, JUDAC, JUROPA3, DEEP access: Login + Compute
  JUQUEEN access: FENs + IONs

All variables are set during the login process by /etc/profile. It is highly recommended to always access files via these variables rather than hard-coded paths.
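For illustration, a minimal job-script fragment that addresses all files through these variables; the project directory "myproject" and the file names are hypothetical:

  # stage input into the scratch file system and run there
  mkdir -p "$WORK/myproject"
  cp "$HOME/myproject/input.dat" "$WORK/myproject/"
  cd "$WORK/myproject"
  ./simulation < input.dat > results.out
  # copy results worth keeping to the backed-up project space
  cp results.out "$DATA/myproject/"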

Details about the different file systems can be found in
What file system to use for different data?

Details on naming conventions and access right rules for FZJ file systems are given in
HPC Data Rules for GPFS.

File system resources are controlled by a quota policy for each group/project. For more information see
What data quotas do exist and how to list usage?

Good practice notes (use tar for lots of small files, don't rename directories)

  • Avoid a lot of small files
    Large numbers of small files should be consolidated into tar archives to avoid long access times caused by the per-file processing overhead of the underlying operating system (see the sketch after this list).

  • Avoid renaming of directories
    In all file systems backed up by TSM (i.e., all except $WORK), directories in the data path should be renamed only with care, because all data below the renamed directory must be backed up again. If a large amount of data is affected, this delays the backup of genuinely new data in the entire file system and wastes system resources such as CPU time, tape storage, and tape units.