
FAQs for JURECA


FAQs about JURECA

How can Intel MPI be tuned for the JURECA InfiniBand network?

By default, Intel MPI uses uDAPL (Direct Access Programming Library) for inter-node MPI communication via the InfiniBand fabric. In many cases better performance can be achieved with the low-level OFED libibverbs library. In order to select the ofa fabric for Intel MPI inter-node communication, export the following environment variable in your job script or salloc session:

export I_MPI_FABRICS=shm:ofa
How can a compute node in an active allocation be accessed?

In order to obtain a shell on a compute node on which a job is currently running, the following command can be used:

srun -n 1 -N 1 --jobid <jobid> -r <rel. node number> --cpu_bind=none --gres=gpu:0 --pty /bin/bash -i

Here, <jobid> must be replaced with the id of the allocated job and <rel. node number> specifies the relative node number (starting at zero) within the nodes allocated to the job.
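For example, to open an interactive shell on the second node of a job (relative node number 1), assuming a made-up job id of 1234567:

srun -n 1 -N 1 --jobid 1234567 -r 1 --cpu_bind=none --gres=gpu:0 --pty /bin/bash -i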

Error Messages on JURECA

What does the error message "vsetenv ... failed" mean?

The warning message

Note: Max supported value for env var is 4095
vsetenv ... failed

is generated by the InfiniBand driver userspace library when an environment variable exceeds the limit of 4095 characters. This warning has no effect on the running job and can safely be ignored until a fix is available.

General FAQs about the Supercomputer Systems

SSH access problem after SSH client update

In OpenSSH 7.0 the support for ssh-dss host and user keys was disabled by default. If you are using an ssh-dss key (the public key starts with "ssh-dss") you will not be able to log in to the SC systems using the default settings after updating your local SSH installation.
In this case a verbose SSH run

ssh -v <user>@<system>

will display the following message:

debug1: Skipping ssh-dss key /.../.ssh/id_dsa for not in PubkeyAcceptedKeyTypes

To fix this problem, please upload a new key (using the "ssh-rsa" key format) by using our key upload website: Upload-SSH-Key

How to generate and upload ssh keys?

In order to access the JSC computer systems you need to generate an ssh key pair. This pair consists of a public and a private part. Here we briefly describe how to generate and upload such a pair.

On Linux/UNIX

In order to create a new ssh key pair, log in to the local machine from which you want to connect to the JSC computer systems. Open a shell and use the following command:

ssh-keygen -b 2048 -t rsa

You are asked for a file name and location where the key should be saved. Unless you really know what you are doing, please simply take the default by hitting the enter key. This will generate the ssh key in the .ssh directory of your home directory ($HOME/.ssh).
Next, you are asked for a passphrase. Please choose a secure passphrase. It should be at least 8 characters long and should contain numbers, letters and special characters like !@#$%^&*().

Important: You are NOT allowed to leave the passphrase empty!

You need to upload the public part of your key ($HOME/.ssh/id_rsa.pub) via the JSC portal JuDoor. You must keep the private part ($HOME/.ssh/id_rsa) confidential.

Important: Do NOT remove it from this location and do NOT rename it!
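To display the public key so that its content can be copied into the upload form, you can simply print it in the terminal:

cat $HOME/.ssh/id_rsa.pub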

You will be notified by email once your account is created. You can then upload ssh keys in JuDoor which will become active after a short amount of time. To login, please use

ssh <yourid>@<machine>.fz-juelich.de

where 'yourid' is your user id on the JSC system 'machine' (i.e. you have to replace 'machine' by the corresponding JSC system). You will be prompted for the passphrase of the ssh key, i.e. the one you entered when you generated the key (see above).

On Windows

You can generate the key pair using, for example, the PuTTYgen tool, which is provided by the PuTTY project. Start PuTTYgen, choose SSH-2 RSA at the bottom of the window, set the 'number of bits in the generated key' to 2048 and press the 'Generate' button.

PuTTYgen will prompt you to generate some randomness by moving the mouse over the blank area. Once this is done, a new public key will be displayed at the top of the window.

Enter a secure passphrase. It should be at least 8 characters long and should contain numbers, letters and special characters like !@#$%^&*().

Important: You are NOT allowed to leave the passphrase empty!

Save the public and the private key. We recommend using 'id_rsa.pub' for the public and 'id_rsa' for the private part.

The correct public key for the upload is the one displayed directly in the PuTTYgen window; the .pub file that PuTTYgen saves uses a different format.

You need to upload the public part of your key (id_rsa.pub) via the JSC portal JuDoor. You must keep the private part (id_rsa) confidential.

You will be notified by email once your account is created. You can then upload ssh keys in JuDoor, which will become active after a short amount of time. To log in, please use an SSH client for Windows, select the authentication method 'public-key', import the key pair you generated above and log in to the corresponding JSC system with your user id. If you are using the PuTTY client, you can import the key in the configuration category 'Connection', subcategory 'SSH' -> 'Auth'. Once this is done you will be prompted for the passphrase of the ssh key, i.e. the one you entered when you generated the key (see above).

Adding additional keys

If you would like to connect to your account from more than one computer, you can create and use additional pairs of public and private keys:

After creating an additional pair of public/private keys, please upload the new public key via JuDoor and do not select the checkbox "Remove all other existing public keys.".

Replace ssh keys

If you would like to put new keys on the system to replace the existing keys, please upload the new key via JuDoor and select the checkbox "Remove all other existing public keys.".

Connection problem after creating a new key

It can happen that the new key is not loaded automatically by your local SSH agent (you will receive a permission denied error when you try to connect to the JSC computer system). To update your SSH agent manually you can use the command:

ssh-add <your private key-file>
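To check which keys are currently loaded by your SSH agent, you can list them:

ssh-add -l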

FAQs about Data Management

How to avoid multiple SSH connections on data transfer?

When transferring multiple files, it can be problematic to use a separate SSH connection for each transfer operation. The network firewall can block a large number of independent simultaneous SSH connections. There are different options to avoid multiple SSH connections:

Use rsync or use scp with multiple files:

rsync -avhzP local_folder/ username@host:remote_folder

rsync only copies new or changed files, which conserves transfer bandwidth.

scp -r local_folder/ username@host:remote_folder

will copy local_folder recursively

Use a tar container to transfer fewer files

Creating a tar file and transferring it can be much faster than transferring all files separately:

tar -cf tar.file local_folder

The tar file creation, transmission and extraction process can also be done on the fly:

tar -c local_folder/ | ssh username@host \
'cd remote_folder; tar -x'

Use a shared SSH connection

A shared SSH connection allows the same connection to be used multiple times:

Open master connection:

ssh -M -S /tmp/ssh_mux_%h_%p_%r username@host

Reuse connection:

ssh -S /tmp/ssh_mux_%h_%p_%r username@host

A shared connection can also be used when using scp:

scp -o 'ControlPath /tmp/ssh_mux_%h_%p_%r' \
local_folder username@host:remote_folder
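Instead of opening the master connection by hand, OpenSSH can also be configured to share connections automatically. A minimal sketch for ~/.ssh/config (the host alias and placeholders below are examples, not official names):

Host jsc
    HostName <system>.fz-juelich.de
    User <yourid>
    # reuse an existing connection if one is open, otherwise create it
    ControlMaster auto
    ControlPath /tmp/ssh_mux_%h_%p_%r
    # keep the master connection open for 10 minutes after the last session exits
    ControlPersist 10m

With such an entry, ssh, scp and rsync to the same host all share one connection.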

How to restore files from the home or project directory?

How to restore user or project data

All data repositories besides $SCRATCH provide a data protection mechanism based on IBM Spectrum Protect (TSM) or, for the large file system $DATA, on the Spectrum Scale (GPFS) snapshot technology.

For the TSM backup, only the JUDAC system is capable of retrieving lost data, using the command line tool adsmback:

adsmback -type=<target repository>

Do not use the native dsmj command, which will not show any home data.

$HOME - User's personal data

All files within the users' home directories ($HOME) are automatically backed up by TSM (Tivoli Storage Manager). To restore a file, use

adsmback -type=home &

on JUDAC.

This command grants access to the correct backup data of the user's assigned home directory.

Follow the GUI by selecting:

Restore -> View -> Display active/inactive files
File level -> p -> home -> jusers -> userid -> ...
Select files or directories to restore
Press [Restore] button

If the data should be restored to its original location, then choose within the Restore Destination window:

  • Original location

Otherwise select:

  • Following location + <path> + Restore complete path

$PROJECT - Compute project repository

All files within the compute project directories ($PROJECT) are automatically backed up by TSM (Tivoli Storage Manager). To restore a file, use

adsmback -type=project &

on JUDAC.

This command grants access to the correct backup data of the project repository.

Follow the GUI by selecting:

Restore -> View -> Display active/inactive files
File level -> p -> project -> group -> ...
Select files or directories to restore
Press [Restore] button

If the data should be restored to its original location, then choose within the Restore Destination window:

  • Original location

Otherwise select:

  • Following location + <path> + Restore complete path

$FASTDATA - Data project repository (bandwidth optimized)

All files within the data project directories ($FASTDATA) are automatically backed up by TSM (Tivoli Storage Manager). To restore a file, use

adsmback -type=fastdata &

on JUDAC.

This command grants access to the correct backup data of the project repository.

Follow the GUI by selecting:

Restore -> View -> Display active/inactive files
File level -> p -> fastdata -> group -> ...
Select files or directories to restore
Press [Restore] button

If the data should be restored to its original location, then choose within the Restore Destination window:

  • Original location

Otherwise select:

  • Following location + <path> + Restore complete path

$DATA - Data project repository (large capacity)

The files within the data project directories ($DATA) are not backed up by TSM (Tivoli Storage Manager). Instead, the snapshot feature of the file system (GPFS) is used. The difference between the TSM backup and the snapshot-based backup is that TSM acts on file changes, while snapshots save the state at a certain point in time. Currently the following snapshots are configured:

daily backup:    last three retained, created every day just after midnight
weekly backup:   last three retained, created every Sunday just after midnight
monthly backup:  last three retained, created on the 1st day of each month just after midnight

The snapshots can be found in a special subdirectory of the project repository. Go to

cd $DATA/.snapshots

and list contents

/p/largedata/jsc> ls
daily-20181129
daily-20181130
daily-20181203
weekly-20181118
weekly-20181125
weekly-20181202
monthly-20181001
monthly-20181101
monthly-20181201

In the subdirectory <type>-<YYYYMMDD> the file version that was valid on the date DD.MM.YYYY can be retrieved, using the same relative path under which the current file is placed in the $DATA repository.

Since the snapshot is part of the file system, the restore can be performed on any system where it is mounted.
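A single file can be restored simply by copying it out of the appropriate snapshot directory; the project and file names below are only examples:

cp $DATA/.snapshots/daily-20181203/myproject/results.dat $DATA/myproject/results.dat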

$ARCHIVE - The Archive data repository

All files within the user's archive directory ($ARCHIVE) for long-term storage are automatically backed up by TSM (Tivoli Storage Manager). To restore a file, use

adsmback [-type=archive] &

on JUDAC.

This command grants access to the correct backup data of the project's assigned archive directory.

Follow the GUI by selecting:

Restore -> View -> Display active/inactive files
File level -> archX -> group -> ...
Select files or directories to restore
Press [Restore] button

If the data should be restored to its original location, then choose within the Restore Destination window:

  • Original location

Otherwise select

  • Following location + <path> + Restore complete path
How to modify the user's environment?

When a user logs in on a frontend node using ssh, a shell is started and a number of environment variables are set. These are defined in system profiles. Each user can add to or modify this environment using their own profiles in their HOME directory.

In the Jülich setup there is a separate HOME directory for each HPC system. This means that the environment differs between JUWELS, JURECA, JUDAC, ..., and the user can modify their own profiles for each system separately. Therefore a skeleton .bash_profile and .bashrc are placed in each $HOME directory when a user is joined to any HPC system.

.bash_profile:
# **************************************************
# bash environment file in $HOME
# Please see:
# http://www.fz-juelich.de/ias/jsc/EN/Expertise/D...
# for more information and possible modifications...
# **************************************************
# Get the aliases and functions: Copied from Cent...
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
export PS1="[\u@\h \W]\$ "

.bashrc:
# **************************************************
# bash environment file in $HOME
# Please see:
# http://www.fz-juelich.de/ias/jsc/EN/Expertise/D...
# for more information and possible modifications...
# **************************************************
# Source global definitions: Copied from CentOS 7...
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

[Figure: User HOME directory structure - a separate HOME directory for each HPC system]

E.g. on JUDAC user graf1 will see $HOME="/p/home/jusers/graf1/JUDAC". The profiles located there are used at login. Only the shared folder (a link) always points to the same directory /p/home/jusers/graf1/shared.

Most site-dependent variables are set automatically by the jutil env init command (system profile). The user can set the correct project variables ($PROJECT, $ARCHIVE, ...) by using

jutil env activate -p <project>

For more information look at the jutil command usage.

How to see the currently set budget:

If a user has to change the budget account during a login session, it might be helpful to see the currently set budget account in the prompt, to be sure to work on the correct budget. To achieve this, replace the current "export PS1=..." line in .bash_profile by:

prompt() {
    PS1="[${BUDGET_ACCOUNTS:-\u}@\h \W]\$ "
}
PROMPT_COMMAND=prompt

This results in the following behaviour:

[user1@juwels07 ~]$ jutil env activate -p chpsadm
[hpsadm@juwels07 ~]$ jutil env activate -p cslfse
[slfse@juwels07 ~]$

How can I see which data is migrated?

There are two file systems which hold migrated data: /arch and /arch2

  • These are so-called archive file systems.
  • In principle all data in the file systems will be migrated to TSM-HSM tape storage in tape libraries.
  • Data is copied to TSM backup storage prior to migration.
  • Every user owns a personal archive directory that can be specified by the $ARCHIVE variable.
  • Data is not limited by a storage quota but by a quota on the number of files per group/project. This is done because UNIX file systems are still not able to handle millions of files with acceptable performance.

The native TSM-HSM command dsmls, which shows whether a file is migrated, is not available on any HPC system (e.g. JUWELS, JURECA, ...) nor on the Data Access System (JUDAC). It is only supported on the TSM-HSM node of the JUST storage cluster, which hosts the file systems for the HPC systems. However, JUST is not open for user access.

Please use

ls -ls [mask | filename]

to list the files. Migrated files can be identified by a block count of 0 in the first column (-s option) and an arbitrary number of bytes in the sixth column (-l option).

0 -rw-r----- 1 user group 513307 Jan 22 2008 log1
0 -rw-r----- 1 user group 114 Jan 22 2008 log2
0 -rw-r----- 1 user group 273 Jan 22 2008 log3
0 -rw-r----- 1 user group 22893504 Jan 23 2008 log4

How can I recall migrated data?

Normally migrated files are automatically recalled from TSM-HSM tape storage when the file is accessed on the login nodes of the HPC systems (e.g. JUWELS, JURECA, ...) or the Data Access System (JUDAC).

For an explicit recall the native TSM-HSM command dsmrecall is not available. Please use

tail <filename>
or:
head <filename>

to start the recall process. These commands will not change any file attribute and the migrated version of the file as well as the backup version stay valid.
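If many migrated files below a directory should be recalled before further processing, a simple sketch is to read the first byte of each file (mydir is a placeholder); this triggers the recall in the same way as head or tail on a single file:

find mydir -type f -exec head -c 1 {} \; > /dev/null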

It is strongly recommended NOT to use

touch <filename>

because this changes the timestamp of the file, so a new backup copy must be created and the file has to be migrated again. These are two additional processes that waste resources if the file is only read by further processing.

What data quotas do exist and how to list usage?

Disk quota management is enabled for all data repositories. The limits are set to default values (defined by JSC) or depend on special requirements of the projects.


Default data quota per user/project within GPFS file systems

File System    Disk Space (soft limit / hard limit)    Number of Files (soft limit / hard limit)
$HOME          10 GB / 11 GB                           10.000 / 11.000
$SCRATCH       90 TB / 95 TB                           4 Mio / 4.4 Mio
$PROJECT       16 TB / 17 TB                           3 Mio / 3.1 Mio
$FASTDATA      project-dependent                       project-dependent
$DATA          project-dependent                       project-dependent
$ARCHIVE       - (see note)                            500.000 / 550.000

Note:
No hard disk space limit exists for $ARCHIVE. Furthermore, special guidelines may exist for some projects.

File size limit

Although the file size limit on operating system level, e.g. on JUWELS or JURECA, is set to unlimited (ulimit -f), the maximum file size is bounded by the GPFS group quota limit for the corresponding file system. The actual limits can be listed with jutil.

List data quota and usage by project or user

Members of a group/project can display the hard limits, quotas (soft limit) and usage by each user of the project using the jutil command.

jutil project dataquota -p <project name>

The quota information for users is updated once a day. To get the current quota usage for $SCRATCH, $PROJECT, $FASTDATA and $DATA you must run the GPFS command

mmlsquota [--block-size {m|g|t|auto}] -j <project> <project | scratch | fastdata | largedata>

For the $ARCHIVE use the option -g <project>

mmlsquota [--block-size {m|g|t|auto}] -g <project> <arch | arch2>

On $HOME each user has their own quota. The command here is:

mmlsquota [--block-size {m|g|t|auto}] -u <userid> home
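For example, to display the current $HOME usage in human-readable units (user1 is a placeholder for your user id):

mmlsquota --block-size auto -u user1 home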

Some notes on the command output:

  • The soft limit is the quota value of the project/user. If the usage exceeds it, the grace period starts (14 days).
  • No user/project can use more than the defined hard limit. Writing/appending will be blocked by the operating system.
  • The column grace reports the status of the quota

    none - no quota exceeded
    x days - remaining grace period to clean up after the soft limit is exceeded
    expired - no data can be written before cleanup

Recommendation for users with a lot of small files

Users with applications that create a lot of relatively small files should reorganize the data by collecting these files within tar-archives using the

tar -cvf archive-filename ...

command. The problem is really the number of files (inodes) that have to be managed by the underlying operating system and not the space they occupy in total. On the other hand please keep in mind the recommendations under File size limit.
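For example, assuming the small files live in a directory results/ (a placeholder), they can be packed into one archive and the archive contents listed without extracting:

tar -cvf results.tar results/
tar -tvf results.tar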

What file system to use for different data?

User data is stored in multiple GPFS file systems for different types of data. Each file system has its own data policies.

  • $HOME
    Acts as repository for the user's personal data like the SSH key. There is a separate HOME folder for each HPC system and a shared folder which points on all systems to the same directory. Data within $HOME is backed up by TSM; see also "How to restore files from the home or project directory?" above.

  • $SCRATCH
    Is bound to a compute project and acts as a temporary storage location with high I/O bandwidth (150 GB/s measured from a JUWELS application). If the application is able to handle large files and I/O demands, $SCRATCH is the right file system to place them. Data within $SCRATCH is not backed up, and a daily cleanup is done.

    • Normal files older than 90 days will be purged automatically. In reality modification and access date will be taken into account, but for performance reasons access date is not set automatically by the system but can be set by the user explicitly with
      touch -a <filename>.
      Time stamps that are recorded with files can be easily listed by
      stat <filename>.
    • Empty directories, as they will arise amongst others due to deletion of old files, will be deleted after 3 days. This applies also to trees of empty directories which will be deleted recursively from bottom to top in one step.
  • $PROJECT
    Data repository for a compute project. Its lifetime is bound to the project lifetime. Data are backed up by TSM.

  • $FASTDATA
    Belongs to a data project. This file system is bandwidth optimized (similar to $SCRATCH), but data is persistent and backed up by TSM.

  • $DATA
    Belongs to a data project. This file system is designed to store a huge amount of data on disk-based storage. The bandwidth is moderate. Backup is realized with the GPFS snapshot feature; for more information, see "$DATA - Data project repository (large capacity)" above.

  • $ARCHIVE
    Is bound to a data project and acts as storage for all files not in use for a longer time. Data are migrated to tape storage by TSM-HSM. It is recommended to use tar files with a maximum size of 1 TB; this is caused by the speed of reading/writing data from/to tape. All data in $ARCHIVE first has to be backed up, which takes about 10 hours for 1 TB. Next the data is migrated to tape, which takes about 3 hours per 1 TB. Please keep in mind that a recall of the data will need approximately the same time. See also "How can I recall migrated data?" above.

All GPFS file systems are managed by quotas for disk space and/or number of files. See also "What data quotas do exist and how to list usage?" above.

How to share files by using ACLs?

ACLs (Access Control Lists) provide a means of specifying access rights on files. GPFS access control lists allow the definition of access rights for other users or groups.

Create or change a GPFS access control list

mmeditacl <filename>

which will open the ACL definition of <filename> in an editor.


Note that for this command to work the EDITOR environment variable must contain a complete path name, for example on JUDAC: export EDITOR=/usr/bin/vim

Example:
Set read and execute permission for user user1 and execute permission only for user2 to directory dir1:

mmeditacl dir1
.... (append 3 lines to the displayed lines) ....
mask::r-x-
user:user1:r-x-
user:user2:--x-

Note that the mask must contain the maximum of the permissions granted to any user in this ACL, and that access must be granted on every directory in the path hierarchy (especially the home directory). The 4th character stands for the GPFS-specific control permission.

When the file is saved, the following has to be answered:

mmeditacl: 6027-967 Should the modified ACL be applied? (yes) or (no)
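The resulting ACL can be checked afterwards with mmgetacl:

mmgetacl dir1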

Which files have an access control list?

The command

ls -l

will show a "+" for every file that has ACL set, eg.

drwx------+ 2 user group 32768 Feb 21 09:25 dir1

Delete a GPFS access control list

mmdelacl <filename>

or remove the added lines by mmeditacl.

Apply a GPFS ACL recursively

Example:
To apply the ACL of dir1 to all files and directories below dir1, use:

for i in `find dir1`
do
    mmgetacl dir1 | mmputacl $i
done
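Note that this simple loop breaks on path names containing spaces. A more robust sketch with the same effect lets find invoke the commands itself:

find dir1 -exec sh -c 'mmgetacl dir1 | mmputacl "$1"' _ {} \;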

Documentation

Please see the man pages or IBM documentation for further commands:

mmdelacl, mmgetacl, mmputacl

