General FAQs about JUQUEEN

Error messages on JUQUEEN

Data questions

General FAQs about JUQUEEN

Why should jobs write regular checkpoints?

The enhanced complexity of the new-generation supercomputers at JSC increases the probability that a job might be affected by a failure. Therefore, we strongly encourage all users of these systems to write regular checkpoints from their applications to avoid losses of CPU time when a job is aborted. There will be no refund of CPU time in the case of a failed job!

Tip: Besides checkpointing, jobs with a wall clock limit shorter than the maximum allowed time may also have a better turnaround time on JUQUEEN, because they can be used to fill the machine optimally while it is being prepared for regular maintenance slots or full-machine runs.
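As an illustration, a minimal LoadLeveler job command file that requests a shorter wall clock limit might look like the sketch below. It assumes the usual LoadLeveler keywords for Blue Gene jobs; the block size, time limit, and executable name are placeholders to be adapted to your project.

# @ job_name         = short_job
# @ job_type         = bluegene
# @ bg_size          = 512
# @ wall_clock_limit = 02:00:00
# @ error            = $(job_name).$(jobid).err
# @ output           = $(job_name).$(jobid).out
# @ queue
runjob --exe myprog --ranks-per-node 16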

How can a soft limit for the wall clock time be used?

At the moment there is no way to use the soft limit.
The signal that is sent by LoadLeveler on the front-end node is not routed to the Blue Gene application.

The only alternative, although not fully adequate, is to measure the elapsed time from within the application and to estimate the remaining run time manually.

How can I link Fortran subroutines into my C program?

To link Fortran subroutines, either from libraries such as ESSL or LAPACK or from your own code, into a C main program, you have to add the following libraries after the Fortran routines and all Fortran libraries in your link statement:

-L${XLFLIB_FZJ} -lxl -lxlopt -lxlf90_r -lxlfmath \
-L${XLSMPLIB_FZJ} -lxlomp_ser -lpthread

FZJ provides the environment variables XLFLIB_FZJ and XLSMPLIB_FZJ, which point to the currently installed compiler version, so that makefiles can be kept independent of compiler changes.
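As a hedged example, the following sketch links a Fortran object file into a C main program. It assumes the IBM XL MPI compiler wrappers mpixlc_r and mpixlf90_r are in the PATH; the source file names are placeholders, and -lesslbg is only needed if ESSL routines are actually called.

mpixlf90_r -c fsub.f90
mpixlc_r -c myprog.c
mpixlc_r -o myprog myprog.o fsub.o -lesslbg \
    -L${XLFLIB_FZJ} -lxl -lxlopt -lxlf90_r -lxlfmath \
    -L${XLSMPLIB_FZJ} -lxlomp_ser -lpthread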

What are the conditions for temporary data?

For temporary user data it is recommended to use $WORK instead of /tmp, because $WORK is much larger. (Data on $WORK is kept for 90 days and is not backed up.)

Blue Gene applications cannot access /tmp; jobs that try to do so will be terminated.
If an XL Fortran application creates a temporary file at run time with STATUS='SCRATCH', the file is placed in the directory /tmp by default. To avoid this, redefine TMPDIR:
runjob --exe <myprog> --envs TMPDIR=$WORK/<dir>

Also, do not use /tmp on the front-end nodes!
/tmp is very small there and data is kept for only 7 days.

How do I check how much memory my application is using?

Integrating the routine below will allow you to track the memory usage of your application:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>   /* for uint64_t */

#include <spi/include/kernel/memory.h>

void print_memusage()
{
    uint64_t shared, persist, heapavail, stackavail, stack, heap, guard, mmap;

    /* Query the compute node kernel for the memory usage of this process (values in bytes) */
    Kernel_GetMemorySize(KERNEL_MEMSIZE_GUARD, &guard);
    Kernel_GetMemorySize(KERNEL_MEMSIZE_SHARED, &shared);
    Kernel_GetMemorySize(KERNEL_MEMSIZE_PERSIST, &persist);
    Kernel_GetMemorySize(KERNEL_MEMSIZE_HEAPAVAIL, &heapavail);
    Kernel_GetMemorySize(KERNEL_MEMSIZE_STACKAVAIL, &stackavail);
    Kernel_GetMemorySize(KERNEL_MEMSIZE_STACK, &stack);
    Kernel_GetMemorySize(KERNEL_MEMSIZE_HEAP, &heap);
    Kernel_GetMemorySize(KERNEL_MEMSIZE_MMAP, &mmap);

    /* Report allocated/available sizes in MB */
    printf("MEMSIZE heap: %.2f/%.2f stack: %.2f/%.2f mmap: %.2f MB\n",
           (double)heap/(1024*1024), (double)heapavail/(1024*1024),
           (double)stack/(1024*1024), (double)stackavail/(1024*1024),
           (double)mmap/(1024*1024));
    printf("MEMSIZE shared: %.2f persist: %.2f guard: %.2f MB\n",
           (double)shared/(1024*1024), (double)persist/(1024*1024),
           (double)guard/(1024*1024));
}

How can core dumps be disabled or limited?

Core dumps are enabled on JUQUEEN by default.

Because writing core files from thousands of nodes takes (too) much time, the generation of core files can be suppressed or limited.

How to disable core files?

The BG_COREDUMPDISABLED environment variable must be set to 1 and exported to the runjob environment:

export BG_COREDUMPDISABLED=1

runjob --exe <filename> --exp-env BG_COREDUMPDISABLED

How to limit the number of core files?

The BG_COREDUMPMAXNODES environment variable must be set to the maximum number of nodes that may write a core file and exported to the runjob environment:

export BG_COREDUMPMAXNODES=<n>

runjob --exe <filename> --exp-env BG_COREDUMPMAXNODES

How to read core files?

Core files are plain text files that include traceback information in hexadecimal.

The tool addr2line can help to translate the hexadecimal addresses into source file and line information.
For more information use

addr2line -h

(The executable should have been compiled with the option -g.)
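For illustration, a hypothetical invocation that resolves a single address taken from a core file; the executable name myprog and the address are placeholders:

addr2line -e myprog 0x01b24f68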

How to generate and upload ssh keys?

In order to access the JSC computer systems you need to generate an ssh key pair. This pair consists of a public and a private part. Here we briefly describe how to generate and upload such a pair.

On Linux/UNIX

To create a new ssh key pair, log in to the local machine from which you want to connect to the JSC computer systems. Open a shell and use the following command

ssh-keygen -b 2048 -t rsa

You are asked for a file name and location where the key should be saved. Unless you really know what you are doing, please simply take the default by hitting the enter key. This will generate the ssh key in the .ssh directory of your home directory ($HOME/.ssh).
Next, you are asked for a passphrase. Please, choose a secure passphrase. It should be at least 8 characters long and should contain numbers, letters and special characters like !@#$%^&*().

Important: You are NOT allowed to leave the passphrase empty!

You will be asked to upload the public part of your key ($HOME/.ssh/id_rsa.pub) on the JSC web site when you apply for an account. You must keep the private part ($HOME/.ssh/id_rsa) confidential.

Important: Do NOT remove it from this location and do NOT rename it!

You will be notified by email once your account is created and your public key is installed. To login, please use

ssh <yourid>@<machine>.fz-juelich.de

where 'yourid' is your user id on the JSC system 'machine' (i.e. you have to replace 'machine' by the corresponding JSC system). You will be prompted for your passphrase of the ssh key which is the one you entered when you generated the key (see above).

On Windows

You can generate the key pair using, for example, the PuTTYgen tool, which is provided by the PuTTY project. Start PuTTYgen, choose SSH-2 RSA at the bottom of the window, set the 'number of bits in the generated key' to 2048, and press the 'Generate' button.

PuTTYgen will prompt you to generate some randomness by moving the mouse over the blank area. Once this is done, a new public key will be displayed at the top of the window.

Enter a secure passphrase. It should be at least 8 characters long and should contain numbers, letters and special characters like !@#$%^&*().

Important: You are NOT allowed to leave the passphrase empty!

Save the public and the private key. We recommend using 'id_rsa.pub' for the public and 'id_rsa' for the private part.

You will be asked to upload the public part of your key (id_rsa.pub) on a JSC web site when you apply for an account. You must keep the private part (id_rsa) confidential.

You will be notified by email once your account is created and your public key is installed. To log in, please use an ssh client for Windows with the authentication method 'public-key', import the key pair you generated above, and log in to the corresponding JSC system with your user id. If you are using the PuTTY client, you can import the key in the configuration category 'Connection', subcategory 'SSH' -> 'Auth'. Once this is done, you will be prompted for the passphrase of the ssh key, which is the one you entered when you generated the key (see above).

Adding additional keys

If you would like to connect to your account from more than one computer, you can create and use additional pairs of public and private keys:

After creating a pair of public/private keys there are two ways of installing the public key on the target machine:

Method 1 (Linux/Mac):

Use the ssh-copy-id command to upload and add the public key file 'public_key.pub' to the account 'user' on the target machine 'targetmachine' in one step:

ssh-copy-id -i public_key.pub user@targetmachine

Please refer to the man-page of ssh-copy-id for further information.

Method 2 (all operating systems):

i) Upload the public key file to your account on the target HPC system.

ii) In case the public key was created under Windows (e.g. in PuTTY), it has to be converted. This is done on the target HPC system with the command

ssh-keygen -i -f original_public_key_file.pub > new_public_key_file.pub

iii) Open the (new) key file and copy the whole line.

iv) Append the line as a new line to the file ~/.ssh/authorized_keys (see the example after this list).

v) Make sure the private key sits in the correct place on your local computer.
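As a minimal sketch of steps iii) and iv) on the target system, assuming the uploaded (and, if necessary, converted) key file is named new_public_key_file.pub:

cat new_public_key_file.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys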

Replace SSH Key

In case the ssh key has to be replaced, use the following link: Upload of ssh-key

Note: This will replace ALL public keys by the new public key. If you use more than one key pair you will have to add your additional public keys as described above.

Error messages on JUQUEEN

How can I omit the emergency warning when starting emacs?

When starting emacs on JUQUEEN the following warning message may appear:

Emergency (alloc): Warning: past 95% of memory limit

This warning can be ignored. To avoid this notification, you need to include the following line in the file $HOME/.emacs (you need to create this file if it does not exist):

(setq warning-suppress-types '((alloc)))

Please include all parentheses in this line.

What does the error message "Load failed on Rxx-xx-xxx: Generating static TLB map for application failed, errno 0" mean?

The application was loaded, but the CNK (compute node kernel) was not able to generate a physical memory map for it. The usual reason is that the application is too big for the 16 GB of memory available per node.

What do you get when you run size <executable name>?

If you take the data segment and multiply it by the number of processes per node, does it exceed (or get close to) 16 GB?

If this is the case, you will need to reduce the memory footprint of the executable. Perhaps some modules in your code are not required and do not need to be linked into the executable. Another possibility is to decrease the number of ranks per node so that more memory is available per rank.
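As an illustrative sketch (the executable name myprog, the numbers, and the 16 ranks per node are all placeholders), the check could look like this:

size myprog
# example output (hypothetical numbers):
#    text    data       bss       dec      hex  filename
# 2097152 8388608 524288000 534773760 1fe00000  myprog

# static footprint per node = (data + bss) * ranks per node, here for 16 ranks, in MB:
echo $(( (8388608 + 524288000) * 16 / 1024 / 1024 )) MB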

Data questions

What file system to use for different data?

In principle there are three GPFS file systems for different types of user data. Each file system has its own data policies.

  • $HOME
    Acts as a repository for source code, binaries, libraries and applications with small size and I/O demands. Data within $HOME is backed up by TSM.

  • $WORK
    Acts as a temporary storage location with high I/O bandwidth (160 GB/s measured from a JUQUEEN application). If the application is able to handle large files and I/O demands, $WORK is the right file system for them. Data within $WORK is not backed up, and a daily cleanup is performed.

    • Normal files older than 90 days will be purged automatically. Both the modification and the access date are taken into account, but for performance reasons the access date is not set automatically by the system; it can be set explicitly by the user with
      touch -a <filename>.
      The time stamps recorded for a file can easily be listed with
      stat <filename>.
    • Empty directories, which arise among other things from the deletion of old files, will be deleted after 3 days. This also applies to trees of empty directories, which are deleted recursively from bottom to top in one step.
  • $ARCH
    Acts as storage for all files that are not in use for a longer time. Data is migrated to tape storage by TSM-HSM. It is recommended to use tar files with a maximum size of 1 TB; this limit is due to the speed of reading/writing data from/to tape. All data in $ARCH is first backed up, which takes about 10 h per TB, and then migrated to tape, which takes about 3 h per TB. Please keep in mind that a recall of the data will need approximately the same time.

All GPFS file systems are managed with quotas on disk space and/or the number of files, see the following question.

What data quotas do exist and how to list usage?

Disk quota limitations in the $HOME and $WORK file systems have been in effect since the end of October 2007. This became necessary because in the past single users blocked file systems by creating millions of files, which degraded the performance of system commands (ls, du). In addition, migration of $HOME data no longer worked reliably, and therefore the new type of archive file system, $ARCH, was introduced. The following limitations have applied in general since December 2009; the numbers are updated according to the actual capacity of the file systems.


Data quota per group/project within GPFS file systems

File System    Disk Space (Soft Limit / Hard Limit)    Number of Files (Soft Limit / Hard Limit)
$HOME          6 TB / 7 TB                             2 Mio / 2.2 Mio
$WORK          20 TB / 21 TB                           4 Mio / 4.4 Mio
$ARCH          - (see note)                            2 Mio / 2.2 Mio

Note:
No disk space limit exists for $ARCH, but if more than 100 TB is required, please contact the supercomputing support at JSC (sc@fz-juelich.de) to discuss optimal data processing, particularly with regard to the end of the project. Furthermore, special guidelines may exist for some projects.

File size limit

Although the file size limit at the operating system level (Linux for JUDGE and JUQUEEN) is set to unlimited (ulimit -f), the maximum file size is bounded by the GPFS group quota limit of the corresponding file system. The actual limits can be listed with q_dataquota.

List data quota and usage by group and user

Members of a group/project can display the hard limits, quotas (soft limits) and the usage of each user of the group in a group-specific file (/homex/group/usage.quota) that is updated every three hours during prime shift (see the timestamp at the top of the file). Since the end of January 2013, for easier reading, the unit of measure is GB instead of KB. As a consequence, the displayed values are always rounded up to the next GB value: if less than 1 GB is used, e.g. 256 KB or 128 MB, 1 GB will be shown.

more $HOME/../usage.quota

This file can also be listed in a short and long format by the command

q_dataquota [-l]

The short format will display the group quota limits and group data usage for each file system followed by the usage of the user herself/himself. The long listing includes the data usage of all users of the group in descending order.

Notes:

  • Even if no quota limits are listed for a group in the $WORK file system, quotas are set! Quota accounting starts with the first file created by a user of the group.
  • If the message "Cannot exceed the user or group quota" is displayed when writing data to a file, the sum of used and in_doubt blocks has exceeded the hard limit. Please be aware that not only the used blocks are taken into account!
  • The column grace reports the status of the quota

    none - no quota exceeded
    xdays - remaining grace period to clean up after the soft limit is exceeded
    expired - no data can be written before cleanup

List current data quota and usage by group

An up-to-date view of the group's data usage and limits can be displayed with:

mmlsquota -g <group> [ <FS_without_leading_/> | -C just.fz-juelich.de ]

The output, for the specified file system or for all file systems of the JUST storage cluster, shows the usage summary of the specified group (not of its members), in KByte units by default. For better readability, a unit of measure can be specified, or GPFS can select the one that fits best. To do so, specify the option (with GPFS 3.5.x)

--block-size {M|G|T|auto}
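For example, a hypothetical call (the group name mygroup and the file system name work are placeholders) might look like:

mmlsquota -g mygroup --block-size auto work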


System actions when limits are exceeded

  • Soft limit
    If any soft limit is exceeded, a grace period of 14 days starts to count down. If no data is deleted to get back under the limit, the quota expires after the grace period and no files can be created or expanded any longer. If the hard limit is exceeded in the meantime, the quota expires immediately.
  • Hard limit
    If any hard limit is exceeded (the sum of used and in_doubt blocks is taken into account), the users in the group cannot create new files or expand existing files in the corresponding file system until the number of files or the disk space allocated is below the limit.

Recommendation for users with a lot of small files

Users with applications that create a lot of relatively small files should reorganize the data by collecting these files in tar archives using the

tar -cvf archive-filename ...

command. The real problem is the number of files (inodes) that have to be managed by the underlying operating system, not the space they occupy in total. On the other hand, please keep in mind the recommendations under 'File size limit' above.
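A minimal sketch (the directory and archive names are placeholders): pack a directory of small files into one archive, verify it, and only then remove the originals to free inodes:

tar -cvf results_2015.tar results/
tar -tvf results_2015.tar > /dev/null && rm -r results/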

How can I recall migrated data?

Normally migrated files are automatically recalled from TSM-HSM tape storage when the file is accessed at JUQUEEN (login nodes only), JUDGE (login and compute nodes), or JUROPA (GPFS gateway nodes only).

For an explicit recall the native TSM-HSM command dsmrecall is not available. Please use

tail <filename>
or:
head <filename>

to start the recall process. These commands will not change any file attribute and the migrated version of the file stays valid.

It is strongly recommended not to use

touch <filename>

because it changes the timestamp of the file, so a new backup copy must be created and the file has to be migrated again. These are two additional processes that waste resources if the file is only read by further processing.

How can I see which data is migrated?

There are three file systems that hold migrated data: /arch, /arch1, /arch2

  • These are so-called archive file systems.
  • In principle, all data in these file systems will be migrated to TSM-HSM tape storage in tape libraries.
  • Data is copied to TSM backup storage prior to migration.
  • Every user owns a personal archive directory, which is addressed by the $ARCH or $GPFSARCH variable, respectively.
  • Data is not limited by a quota on storage space but by a quota on the number of files per group/project. This is done because UNIX still cannot handle millions of files in a file system with acceptable performance.

The native TSM-HSM command dsmls, which shows whether a file is migrated, is available neither on JUQUEEN nor on JUDGE nor on JUROPA. This command can only run on JUST, the storage cluster that hosts the file systems for the HPC systems, but JUST is not open for user access.

Please use

ls -ls [mask | filename]

to list the files. Migrated files can be identified by a block count of 0 in the first column (from the -s option) while the sixth column (from the -l option) still shows the original file size.

0 -rw-r----- 1 user group 513307 Jan 22 2008 log1
0 -rw-r----- 1 user group 114 Jan 22 2008 log2
0 -rw-r----- 1 user group 273 Jan 22 2008 log3
0 -rw-r----- 1 user group 22893504 Jan 23 2008 log4
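As a hedged sketch, migrated files in the current directory could be filtered with awk by selecting entries whose block count is 0 but whose size is non-zero:

ls -ls | awk '$1 == 0 && $6 > 0'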

How to restore a file from the home directory?

All files within the users' home directories ($HOME) are automatically backed up by TSM (Tivoli Storage Manager). To restore a file, use

adsmback [-type={home | gpfshome} ] &

on the login nodes of JUQUEEN or JUDGE, or the GPFS gateway nodes of JUROPA. If the option -type is not specified, the user will be prompted for the type of file system:

Which type of filesystem should be restored? Enter: {home | arch | gpfshome}

This command grants access to the correct backup data of the user's assigned home directory. 'gpfshome' applies to JUROPA only because JUROPA users have an additional GPFS home directory besides the standard Lustre home directory.

Follow the GUI by selecting:

File level -> [j]homeX -> group -> userid -> ...
Select files or directories to restore
Press the [Restore] button

If the data should be restored to the original location, then choose within the Restore Destination window:

  • for JUQUEEN: Original location
  • for JUDGE: Original location
  • for JUROPA (GPFS): Following location + /gpfs/homeX + Restore complete path

Do not use the native dsmj command, which will not show any home data.

How to restore a file from the archive directory?

All files within the user's archive directory ($ARCH) for long-term storage are automatically backed up by TSM (Tivoli Storage Manager). To restore a file, use

adsmback [-type=arch] &

on the login nodes of JUQUEEN or JUDGE, or the GPFS gateway nodes of JUROPA. If the option -type is not specified, the user will be prompted for the type of file system:

Which type of filesystem should be restored? Enter: {home | arch | gpfshome}

This command grants access to the correct backup data of the user's assigned archive directory.

Follow the GUI by selecting:

File level -> archX -> group -> userid -> ...
Select files or directories to restore
Press the [Restore] button


If the data should be restored to the original location, then choose within the Restore Destination window:

  • for JUQUEEN: Original location
  • for JUDGE: Original location
  • for JUROPA: Following location + /gpfs/archX + Restore complete path

Do not use the native dsmj command, which will not show any archive data.

How to share files by using ACLs?

ACLs (Access Control Lists) provide a means of specifying access rights on files. GPFS access control lists allow the definition of access rights for other users or groups.

Create or change a GPFS access control list

mmeditacl <filename>

which will open the ACL definition of <filename> in an editor.


Note that for this command to work the EDITOR environment variable must contain a complete path name, for example on JUQUEEN: export EDITOR=/usr/bin/vim

Example:
Set read and execute permission for user user1 and execute-only permission for user user2 on directory dir1:

mmeditacl dir1
.... (append 3 lines to the displayed lines) ....
mask::r-x-
user:user1:r-x-
user:user2:--x-

Note that the mask must contain at least the union of the permissions granted to any user in this ACL, and that access must be granted to every directory in the path hierarchy (especially the home directory). The 4th character stands for the GPFS-specific control permission.

When the file is saved, the following has to be answered:

mmeditacl: 6027-967 Should the modified ACL be applied? (yes) or (no)

Which files have an access control list?

The command

ls -l

will show a "+" for every file that has ACL set, eg.

drwx------+ 2 user group 32768 Feb 21 09:25 dir1

Delete a GPFS access control list

mmdelacl <filename>

or remove the added lines by mmeditacl.

Apply a GPFS ACL recursively

Example:
To apply the ACL of dir1 to all files and directories below dir1, use:

for i in `find dir1`
do
    mmgetacl dir1 | mmputacl "$i"
done
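A slightly more efficient sketch, assuming the mmputacl option -i (read the ACL from a file) is available in the installed GPFS version, reads the ACL of dir1 only once:

mmgetacl dir1 > /tmp/dir1.acl
find dir1 | while read -r f
do
    mmputacl -i /tmp/dir1.acl "$f"
done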

Documentation

Please see the man pages or IBM documentation for further commands:

mmdelacl, mmgetacl, mmputacl

