Institute for Advanced Simulation (IAS)

Tutorial "Efficient Parallel Programming with GASPI"

19 May 2014, 13:30 - 17:00
Jülich Supercomputing Centre, Ausbildungsraum 2, building 16.3, room 004

In this tutorial we present an asynchronous dataflow programming model for Partitioned Global Address Spaces (PGAS) as an alternative to the MPI programming model. GASPI, which stands for Global Address Space Programming Interface, is a PGAS API designed as a C/C++/Fortran library and focused on three key objectives: scalability, flexibility and fault tolerance. To achieve its much improved scaling behaviour, GASPI relies on asynchronous dataflow with remote completion rather than on bulk-synchronous message exchanges. GASPI follows a single/multiple program, multiple data (SPMD/MPMD) approach and offers a small yet powerful API. It is successfully used in academic and industrial simulation applications. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of GASPI. The course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants.
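The pattern described above, one-sided writes whose remote completion is signalled by notifications, can be sketched as follows. This is a minimal illustration using the GPI-2 implementation of the GASPI API (header GASPI.h); the segment id, offsets, queue and neighbour-exchange scheme are illustrative choices, and the program must be launched through a GASPI starter such as gaspi_run.

```c
/* Sketch: one-sided gaspi_write_notify with remote completion.
   Requires a GASPI implementation (e.g. GPI-2); start with gaspi_run. */
#include <GASPI.h>
#include <stdio.h>
#include <stdlib.h>

#define SEG_ID   0            /* illustrative segment id   */
#define SEG_SIZE (1 << 20)    /* 1 MiB segment per process */

static void success_or_die(gaspi_return_t ret, const char *what)
{
  if (ret != GASPI_SUCCESS) {
    fprintf(stderr, "%s failed (%d)\n", what, ret);
    exit(EXIT_FAILURE);
  }
}

int main(void)
{
  success_or_die(gaspi_proc_init(GASPI_BLOCK), "gaspi_proc_init");

  gaspi_rank_t rank, nprocs;
  gaspi_proc_rank(&rank);
  gaspi_proc_num(&nprocs);

  /* Each process contributes one segment to the partitioned
     global address space. */
  success_or_die(gaspi_segment_create(SEG_ID, SEG_SIZE, GASPI_GROUP_ALL,
                                      GASPI_BLOCK, GASPI_MEM_INITIALIZED),
                 "gaspi_segment_create");

  gaspi_pointer_t ptr;
  gaspi_segment_ptr(SEG_ID, &ptr);
  double *data = (double *) ptr;
  data[0] = (double) rank;                 /* payload for the right neighbour */

  const gaspi_rank_t right = (rank + 1) % nprocs;

  /* One-sided write plus notification: the notification becomes visible
     on the target only after the payload has arrived (remote completion). */
  success_or_die(gaspi_write_notify(SEG_ID, 0,          /* local seg/offset  */
                                    right, SEG_ID,
                                    sizeof(double),     /* remote offset     */
                                    sizeof(double),     /* payload size      */
                                    0, 1,               /* notif. id, value  */
                                    0, GASPI_BLOCK),    /* queue, timeout    */
                 "gaspi_write_notify");

  /* Wait until the left neighbour's write has fully arrived. */
  gaspi_notification_id_t first;
  success_or_die(gaspi_notify_waitsome(SEG_ID, 0, 1, &first, GASPI_BLOCK),
                 "gaspi_notify_waitsome");
  gaspi_notification_t old;
  gaspi_notify_reset(SEG_ID, first, &old);

  printf("rank %d received %g from its left neighbour\n", rank, data[1]);

  gaspi_wait(0, GASPI_BLOCK);   /* local completion of queue 0 */
  gaspi_barrier(GASPI_GROUP_ALL, GASPI_BLOCK);
  gaspi_proc_term(GASPI_BLOCK);
  return 0;
}
```

Note the asymmetry that distinguishes this from two-sided MPI messaging: the target posts no receive call; it merely waits for the notification, which GASPI guarantees is delivered only after the associated data.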

13:15-13:30 Registration
13:30-14:15 General introduction to GASPI
14:15-14:30 One sided communication in GASPI
14:30-14:45 Coffee Break
14:45-15:00 Memory segments in GASPI
15:00-16:30 Data Flow in GASPI
16:30-16:45 Collectives and Passive Communication
16:45-17:00 Questions and Answers

This course is given in English.
Number of participants: maximum 24
Dr. Christian Simmendinger, T-Systems Solutions for Research GmbH,
Dr. Mirko Rahn and Dr. Daniel Gruenewald, Fraunhofer ITWM
Please register with Rene Halver, Tel. +49 2461 61 6424.
Announcement as pdf file: Tutorial "Efficient Parallel Programming with GASPI" (PDF, 32 kB)