Training course "Advanced Parallel Programming with MPI and OpenMP"

Start
26th November 2018 08:00 AM
End
28th November 2018 03:30 PM
Location
Jülich Supercomputing Centre, Ausbildungsraum 1, building 16.3, room 213a

(Course no. 1192018 in the training programme 2018 of Forschungszentrum Jülich)

Target audience:

Supercomputer users who want to optimize their programs with MPI or OpenMP and already have experience in parallel programming

Contents:

The focus is on advanced programming with MPI and OpenMP. The course addresses participants who already have some experience with C/C++ or Fortran and with MPI and OpenMP, the most popular programming models in high-performance computing (HPC).

The course will teach the newest methods in MPI-3.0/3.1 and OpenMP-4.5, which were developed for the efficient use of current HPC hardware. MPI topics are the group and communicator concept, process topologies, derived datatypes, the new MPI-3.0 Fortran language binding, one-sided communication, and the new MPI-3.0 shared memory programming model within MPI. OpenMP topics are the OpenMP-4.0 extensions, such as the vectorization directives, thread affinity, and OpenMP places. (GPU programming with OpenMP-4.0 directives is not part of this course.) The course also covers performance and best-practice considerations, e.g., for hybrid MPI+OpenMP parallelisation, and ends with a section presenting tools for parallel programming.

Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the constructs of the Message Passing Interface (MPI) and the shared memory directives of OpenMP taught in the course. This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants. It is organized by JSC in collaboration with HLRS. (Content level: 20% for beginners, 50% intermediate, 30% advanced.)
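
To give a flavour of these topics, the following minimal C sketch (illustrative only, not taken from the course material; array size and variable names are arbitrary) combines the MPI-3.0 shared-memory model (MPI_Comm_split_type and MPI_Win_allocate_shared) with an OpenMP 4.0 combined parallel/SIMD loop:

/* Illustrative sketch: one shared-memory window per node, filled with an
 * OpenMP SIMD-vectorised loop. Error handling is omitted for brevity. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    /* Sub-communicator containing only the ranks that share memory (one per node). */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int node_rank, node_size;
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_size(node_comm, &node_size);

    /* Each rank contributes a slice of a node-local shared window. */
    const int local_n = 1024;           /* arbitrary slice size */
    double *local;
    MPI_Win win;
    MPI_Win_allocate_shared((MPI_Aint)(local_n * sizeof(double)), (int)sizeof(double),
                            MPI_INFO_NULL, node_comm, &local, &win);

    /* Fill the local slice with an OpenMP 4.0 combined parallel/SIMD loop. */
    #pragma omp parallel for simd
    for (int i = 0; i < local_n; i++)
        local[i] = (double)(node_rank * local_n + i);

    /* Other ranks on the same node could access this slice directly via
     * MPI_Win_shared_query (not shown here). */
    MPI_Win_fence(0, win);
    if (node_rank == 0)
        printf("%d ranks on this node, local[0] = %f\n", node_size, local[0]);

    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}
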
Prerequisites:

Knowledge of Unix and of either C, C++, or Fortran; familiarity with the principles of MPI at least to the extent of the introductory course MPI and OpenMP, i.e., point-to-point message passing, datatypes, nonblocking communication, and collective communication; familiarity with OpenMP 3.0

Agenda:

Agenda of Advanced MPI Course at JSC

Language:

This course is given in English.

Duration:

3 days

Date:

26-28 November 2018

on 26-27 November from 09:00 to 18:30

on 28 November from 09:00 to 16:30

Venue:

Jülich Supercomputing Centre, Ausbildungsraum 1, building 16.3, room 213a

Number of participants:

maximum 28

Instructors:

Dr. Rolf Rabenseifner, HLRS Stuttgart (for MPI and OpenMP)


JSC staff members (for the Tools session on the third afternoon)

Contact:

Benedikt Steinbusch


Phone: +49 2461 61-2523


E-mail: b.steinbusch@fz-juelich.de

Registration:

Please use the registration form at HLRS.

Deadline: 11 November 2018

Last Modified: 20.05.2022