Institute for Advanced Simulation (IAS)

Using Scalasca on JUGENE

Access

Scalasca versions are installed as UNITE modules on the IBM Blue Gene/P system JUGENE (in /usr/local/UNITE/modulefiles):

% module load UNITE
UNITE loaded
% module avail scalasca
------------ /usr/local/UNITE/modulefiles/tools ------------
scalasca/1.0     scalasca/1.1    scalasca/1.2    scalasca/1.3.0(default)
% module whatis scalasca
----------- /usr/local/UNITE/modulefiles/tools -------------
scalasca/1.3.0: Scalable automatic performance analysis toolset
% module help scalasca
-------------------------------------------------------------------
Module Specific Help for /usr/local/UNITE/modulefiles/tools/scalasca/1.3.0:

Scalasca 1.3.0
Scalable automatic performance analysis of large-scale applications

Usage:
Run "scalasca" with no arguments for a brief usage message.
Run "scalasca" with action arguments in sequence:
1. prepare application objects and executable for measurement:
   scalasca -instrument <compile-or-link-command> # skin
2. run application under control of measurement system:
   scalasca -analyze <application-launch-command> # scan
3. interactively explore measurement analysis report:
   scalasca -examine <experiment-archive|report> # square

For more information:
- See $SCALASCA_ROOT/doc/manuals/UserGuide.pdf
  and $SCALASCA_ROOT/doc/manuals/QuickReference.pdf (or run "scalasca -h")
- http://www.scalasca.org/
- mailto:scalasca@fz-juelich.de
-------------------------------------------------------------------
% module load scalasca
scalasca/1.3.0 loaded
% module list
Currently Loaded Modulefiles:
1) UNITE/1.1     2) scalasca/1.3.0
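The three actions correspond to a build, run and examine cycle. A minimal sketch on JUGENE, assuming a hypothetical MPI program bt.f built with the XL Fortran cross-compiler (file, flag and archive names are illustrative):

```shell
# 1. skin: prepend the instrumenter to the usual compile/link command
scalasca -instrument mpixlf77 -O3 -o bt.x bt.f

# 2. scan: prepend the measurement system to the usual launch command
#    (produces a runtime summary experiment; add -t to also collect
#    and automatically analyze traces)
scalasca -analyze mpirun -mode VN -np 256 bt.x

# 3. square: post-process the resulting experiment archive and open
#    the analysis report in the CUBE3 browser
scalasca -examine epik_bt_vn256_sum
```

The experiment archive name shown here is an assumption for illustration; scan reports the actual archive name it creates when the measurement completes.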

Versions

Scalasca/1.0 is the first release to fully incorporate KOJAK 3.0, including support for analysis of MPI, OpenMP and hybrid OpenMP/MPI application executions. Runtime summaries are produced by default; when requested, traces are collected and analyzed in parallel (sequential trace analysis can also be performed when desired). Scalasca/1.1 provides a more powerful Qt-based GUI for examining analysis reports. Scalasca/1.2 offers better scalability, MPI File I/O metrics, a more readable User Guide, and various other bug fixes and improvements. Scalasca/1.3 improves measurement support, including configurable MPI event selectivity, RMA analysis, and the ability to use PDToolkit for selective instrumentation of user source routines. Additional versions may become available in the future.

Each version includes the Scalasca/KOJAK instrumenters (including the OPARI source preprocessor), the EPIK measurement system, the EXPERT and SCOUT trace analyzers, and the CUBE3 analysis browser with associated tools. (Don't mix components from different Scalasca and KOJAK versions, as file formats may be incompatible.)

Each Scalasca installation is configured with the IBM Blue Gene MPI and XL cross-compilers for the BG/P compute nodes; compilers from the GNU compiler collection are not explicitly supported by this installation. When preparing your application with the Scalasca instrumenter, the compilers' flags should be recognized correctly, and automatic instrumentation of user functions can be enabled.
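In application Makefiles this typically amounts to prefixing the compile and link commands with the instrumenter, for example via a PREP variable (a sketch; target and flag names are illustrative):

```make
# Makefile fragment: PREP enables/disables instrumented builds
# ("make PREP=" rebuilds without instrumentation)
PREP   = scalasca -instrument
MPIF77 = $(PREP) mpixlf77

bt.x: bt.o
	$(MPIF77) -O3 -o bt.x bt.o
bt.o: bt.f
	$(MPIF77) -O3 -c bt.f
```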

PAPI is configured so that hardware counter metric measurements can be included in experiments; however, such metrics are ignored by the SCOUT analyzer. (MPI-2 RMA communication and OpenMP trace analyses are not supported by SCOUT prior to Scalasca/1.3 and require use of the sequential EXPERT trace analyzer after merging the trace files.)
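Hardware counter metrics are requested at measurement time via the EPK_METRICS configuration variable, which takes a colon-separated list of counter names (the PAPI presets and launch line shown are illustrative and must be supported on BG/P):

```shell
# Request two PAPI preset counters for the next measurement
export EPK_METRICS=PAPI_FP_OPS:PAPI_L1_DCM
scalasca -analyze mpirun -mode VN -np 256 bt.x
```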

Issues

Support is only provided for measuring and analyzing applications running on the compute nodes, launched by mpirun or llrun; instrumentation and analysis-report browsing are supported only on the front-end nodes. Analysis reports (.cube files) can also be extracted from experiment archives and browsed on any system where CUBE3 is installed (e.g., JUROPA, JUMP or a workstation/notebook computer).

A number of variables can be used to control the EPIK measurement runtime configuration of an instrumented executable; for an annotated list of EPIK configuration variables and their current settings, run the epik_conf command. Variables can be specified as environment variables or in a configuration file called "EPIK.CONF": by default the current directory is searched for this file, or an alternative location can be specified with the EPK_CONF environment variable.
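A minimal EPIK.CONF sketch (variable names as listed by epik_conf; the title, path and buffer value are illustrative assumptions):

```
# EPIK.CONF -- read from the current directory (or from $EPK_CONF)
# when the instrumented application starts
EPK_TITLE=bt_vn256              # experiment archive title
EPK_GDIR=/work/myproj/scratch   # illustrative path on a parallel filesystem
EPK_LDIR=/work/myproj/scratch   # same as EPK_GDIR: avoids intermediate copies
ELG_BUFFER_SIZE=40000000        # per-process trace buffer size in bytes
```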

For (large-scale) MPI applications, ensure that the EPK_LDIR and EPK_GDIR variables are set to the same location: this avoids intermediate file writing and can greatly improve performance. (By default, both are set to the current working directory ".".)

Due to the restricted memory on BG/P compute nodes, care is required when selecting an appropriate buffer size for traces. The default trace buffer size (ELG_BUFFER_SIZE) for each process is rather small and typically only adequate for short traces. It is therefore recommended to set the trace buffer size as large as available memory permits: if too large a size is specified, the application will fail to launch or be unable to acquire memory. Trace buffer requirements can be estimated by running "square -s" on an experiment archive (or the cube3_score utility on a corresponding summary.cube report).
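For example, an existing runtime summary experiment can be scored to estimate trace buffer requirements before attempting trace collection (the archive name is an illustrative assumption):

```shell
# Score the summary experiment: reports estimated per-process
# trace buffer requirements for a subsequent trace collection run
scalasca -examine -s epik_bt_vn256_sum
# equivalently, score the contained summary report directly
cube3_score epik_bt_vn256_sum/summary.cube
```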

When using automatic user function instrumentation, EPK_FILTER can be used to specify a file containing the names of functions (one per line) to be excluded from measurement collection, thereby reducing measurement perturbation and trace size. Although filtering is convenient for experimentation, the remaining instrumentation-handling overhead can still be significant; in such cases it is preferable to compile such functions or modules separately, without compiler instrumentation.
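A filter file is plain text with one function name per line. A sketch, assuming hypothetical function names taken from a scored report:

```shell
# Create a filter file listing functions to exclude from measurement
cat > epik.filt <<'EOF'
binvcrhs
matmul_sub
EOF
# Point EPIK at the filter file for the next measurement
export EPK_FILTER=epik.filt
scalasca -analyze mpirun -mode VN -np 256 bt.x
```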

The current version of PAPI provided by IBM for accessing BG/P hardware counters sometimes fails to reset one or more counters, resulting in erroneous values. The value reported for total cycles (PAPI_TOT_CYC) always seems to be zero.

Measurement experiments containing traces that are too large to analyze in virtual node (VN) mode may be analyzable with the additional memory available in DUAL or SMP mode. (Experiment archives are portable to other systems where sufficient processors with additional memory are available and a compatible version of Scalasca is installed; in practice, however, the size of experiment archives typically prohibits this.)
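For example, the parallel trace analyzer could be re-run on an existing trace experiment using a launch mode with more memory per process; the analyzer binary location and archive name below are assumptions for illustration, and one analysis process is still needed per application process:

```shell
# Re-analyze an existing trace experiment in SMP mode, giving each
# analysis process a full compute node's memory
mpirun -mode SMP -np 256 $SCALASCA_ROOT/bin/scout.mpi epik_bt_vn256_trace
```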

Commands, library interfaces and file formats are subject to change (unstable).

Reference

http://www.scalasca.org/ – Scalasca project website, including overview of toolset architecture, presentations and publications, and software download.

http://www2.fz-juelich.de/zam/kojak/ – KOJAK project website, including overview of toolset architecture, presentations and publications, and software download.

$SCALASCA_ROOT/doc – Directory containing documentation for the Scalasca installation, including quick reference guide, USAGE notes (covering application program instrumentation, runtime measurement control, and automatic and manual analyses), OPEN_ISSUES documenting open issues, known limitations and unimplemented functionality, performance property patterns used by the SCOUT automatic trace analyzer, and CUBE3 user manual for the analysis report browser.

$SCALASCA_ROOT/example – Directory containing example Fortran and C programs with different forms of instrumentation, and an example experiment archive.

scalasca@scalasca.org – Mailing list for Scalasca comments, questions and bug reports.

