Training course "Advanced Parallel Programming with MPI and OpenMP"

Start
27th November 2017 08:00 AM
End
29th November 2017 03:30 PM
Location
Jülich Supercomputing Centre, Ausbildungsraum 1, building 16.3, room 213a

(Course no. 98/2017 in the training programme of Forschungszentrum Jülich)

Target audience:

Supercomputer users who want to optimize their programs with MPI or OpenMP and already have experience in parallel programming

Contents:

The focus is on advanced programming with MPI and OpenMP. The course addresses participants who already have some experience with C/C++ or Fortran and with MPI and OpenMP, the most popular programming models in high-performance computing (HPC).

The course teaches the newest methods in MPI-3.0/3.1 and OpenMP-4.5, which were developed for the efficient use of current HPC hardware. MPI topics include the group and communicator concept, process topologies, derived datatypes, the new MPI-3.0 Fortran language binding, one-sided communication, and the new MPI-3.0 shared-memory programming model within MPI. OpenMP topics include the OpenMP-4.0 extensions, such as the vectorization directives, thread affinity, and OpenMP places. (GPU programming with OpenMP-4.0 directives is not part of this course.) The course also covers performance and best-practice considerations, e.g., with hybrid MPI+OpenMP parallelisation, and ends with a session presenting tools for parallel programming.

Hands-on sessions (in C and Fortran) allow participants to immediately test and understand the MPI constructs and the OpenMP shared-memory directives taught in the lectures. The course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants. It is organized by JSC in collaboration with HLRS. (Content level: 20% beginner, 50% intermediate, 30% advanced.)
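To give a flavour of the topics listed above, here are two small C sketches written for this announcement; they are illustrative examples only, not excerpts from the course material. The first assumes a standard MPI-3.0 installation and uses the shared-memory model mentioned above (MPI_Comm_split_type and MPI_Win_allocate_shared) so that ranks on the same node can read each other's data through direct loads:

/* Illustrative sketch (not course material): MPI-3.0 shared-memory window. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm nodecomm;
    MPI_Win  win;
    int     *mydata, noderank, nodesize;

    MPI_Init(&argc, &argv);

    /* One communicator per shared-memory node (MPI-3.0). */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &nodecomm);
    MPI_Comm_rank(nodecomm, &noderank);
    MPI_Comm_size(nodecomm, &nodesize);

    /* Each rank contributes one int to a contiguous shared window. */
    MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                            nodecomm, &mydata, &win);
    *mydata = 100 + noderank;

    MPI_Win_fence(0, win);   /* order the local stores before the loads below */

    if (noderank == 0) {
        for (int r = 1; r < nodesize; r++) {
            int *rdata, disp;
            MPI_Aint rsize;
            /* Query the address of rank r's segment and read it directly. */
            MPI_Win_shared_query(win, r, &rsize, &disp, &rdata);
            printf("node rank %d wrote %d\n", r, *rdata);
        }
    }

    MPI_Win_fence(0, win);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

The second sketch combines the OpenMP-4.0 vectorization directive with thread affinity; it assumes the place list (e.g. OMP_PLACES=cores) is set in the environment:

/* Illustrative sketch (not course material): OpenMP-4.0 simd + affinity. */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) { b[i] = i; c[i] = 2.0 * i; }

    /* Threads are pinned close together (proc_bind) and the loop is
     * explicitly vectorized with the OpenMP-4.0 simd directive. */
    #pragma omp parallel for simd proc_bind(close) schedule(static)
    for (int i = 0; i < N; i++)
        a[i] = b[i] + c[i];

    printf("a[%d] = %f\n", N - 1, a[N - 1]);
    return 0;
}

Both sketches build with a standard toolchain, e.g. mpicc -std=c99 for the first and a C compiler with -fopenmp for the second.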

Prerequisites:

Knowledge of Unix and of either C, C++, or Fortran; familiarity with the principles of MPI at least to the extent of the introductory course MPI and OpenMP, i.e., point-to-point message passing, datatypes, nonblocking communication, and collective communication; familiarity with OpenMP 3.0

Agenda:

Agenda of Advanced MPI Course at JSC

Language:

This course is given in English.

Duration:

3 days

Date:

27-29 November 2017, 09:00-16:30

Venue:

Jülich Supercomputing Centre, Ausbildungsraum 1, building 16.3, room 213a

Number of participants:

maximum 28

Instructors:

Dr. Rolf Rabenseifner, HLRS Stuttgart (for MPI and OpenMP)


Dr. Markus Geimer, Michael Knobloch, JSC (for the Tools session on the 3rd day)

Contact:

Thomas Breuer


Phone: +49 2461 61-96742


E-mail: t.breuer@fz-juelich.de

Registration:

Please use the registration form at HLRS.


Deadline: 29 October 2017

Course materials

 


See also the announcement of this course at HLRS.

Last Modified: 21.05.2022