Training course "Advanced Parallel Programming with MPI and OpenMP"
(Course no. 91/2016 in the training programme of Forschungszentrum Jülich)
Target audience: Supercomputer users who want to optimize their programs with MPI or OpenMP and who already have experience in parallel programming
Contents: see the course description below
Prerequisites: Knowledge of Unix and of either C, C++ or Fortran; familiarity with the principles of MPI, i.e. point-to-point message passing, datatypes, nonblocking communication, and collective communication; familiarity with OpenMP 3.0
Agenda: Agenda of Advanced MPI Course at JSC
Language: This course is given in German.
Duration: 3 days
Date: 28-30 November 2016, 9:00-16:30
Venue: Jülich Supercomputing Centre, Ausbildungsraum 1, building 16.3, room 213a
Number of participants: maximum 28
Instructors: Dr. Rolf Rabenseifner, HLRS Stuttgart (for MPI and OpenMP); Dr. Markus Geimer, JSC (for the tools session on the third afternoon)
Contact: Dr. Florian Janetzko, phone: +49 2461 61-1446, e-mail: f.janetzko@fz-juelich.de
Registration: closed
The focus is on advanced programming with MPI and OpenMP. The course addresses participants who already have some experience with C/C++ or Fortran and with MPI and OpenMP, the most popular programming models in high-performance computing (HPC).
The course will teach the newest methods in MPI-3.0/3.1 and OpenMP-4.5, which were developed for the efficient use of current HPC hardware. MPI topics include the group and communicator concept, process topologies, derived datatypes, the new MPI-3.0 Fortran language binding, one-sided communication, and the new MPI-3.0 shared-memory programming model within MPI. OpenMP topics include the OpenMP-4.0 extensions, such as the vectorization directives, thread affinity, and OpenMP places. (GPU programming with OpenMP-4.0 directives is not part of this course.) The course also covers performance and best-practice considerations, e.g. for hybrid MPI+OpenMP parallelisation. The course ends with a section presenting tools for parallel programming.
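To give a flavour of one of the MPI-3.0 topics listed above, the following minimal C sketch (an illustration only, not course material) shows how ranks on the same node can allocate a shared-memory window with MPI_Win_allocate_shared and read each other's data directly via MPI_Win_shared_query; all calls used are standard MPI-3.0.

/* Minimal sketch of MPI-3.0 shared-memory windows (illustration only). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Split MPI_COMM_WORLD into communicators whose ranks can share memory. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int node_rank, node_size;
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_size(node_comm, &node_size);

    /* Each rank contributes one int to a shared-memory window. */
    int *my_ptr;
    MPI_Win win;
    MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                            node_comm, &my_ptr, &win);
    *my_ptr = node_rank;

    MPI_Win_fence(0, win);   /* simple synchronisation for this sketch */

    /* Rank 0 reads rank 1's value with an ordinary load, no message passing. */
    if (node_rank == 0 && node_size > 1) {
        int *neighbour;
        MPI_Aint size;
        int disp_unit;
        MPI_Win_shared_query(win, 1, &size, &disp_unit, &neighbour);
        printf("Rank 0 sees value %d written by rank 1\n", *neighbour);
    }

    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}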
Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the constructs of the Message Passing Interface (MPI) and the shared-memory directives of OpenMP taught in the course. This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants. It is organized by JSC in collaboration with HLRS. (Content level: 20% beginner, 50% intermediate, 30% advanced.)
For further information on the course, see the web page at HLRS.