PRACE training course "Parallel and Scalable Machine Learning"

Start
15th January 2018, 08:00
End
17th January 2018, 15:30
Location
Jülich Supercomputing Centre, Rotunda, building 16.4, room 301

(Course no. 1362018 in the training programme 2018 of Forschungszentrum Jülich)

This course is fully booked.

Target audience:

Scientists who want to analyze data with machine learning

Contents:

The course offers the basics of analyzing data with machine learning and data mining algorithms in order to understand the foundations of learning from large quantities of data. It is especially oriented towards beginners who have no previous knowledge of machine learning techniques. The course covers general methods for data analysis in order to understand clustering, classification, and regression. This includes a thorough discussion of the test, training, and validation datasets required to learn from data with high accuracy. Simple application examples will reinforce the theoretical course elements and illustrate problems such as overfitting, followed by mechanisms such as validation and regularization that prevent such problems.

The tutorial starts from a very simple application example in order to teach foundations such as the role of features in data, linear separability, and decision boundaries for machine learning models. In particular, the course points to key challenges in analyzing large quantities of data (aka ‘big data’) in order to motivate the parallel and scalable machine learning algorithms that will be used in the course. The course targets specific challenges in analyzing large datasets that cannot be addressed with traditional serial methods provided by tools such as R, SAS, or Matlab. These challenges arise in the machine learning algorithms themselves, in the distribution of data, and in the process of performing validation. The course introduces selected solutions that overcome these challenges using parallel and scalable computing techniques based on the Message Passing Interface (MPI) and OpenMP, running on massively parallel High Performance Computing (HPC) platforms. The course ends with a more recent machine learning method known as deep learning, which has emerged as a promising disruptive approach that allows knowledge discovery from large datasets with unprecedented effectiveness and efficiency.
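As a small illustration of the train/validation split, linear decision boundaries, and regularization mentioned above, the following sketch shows how such a setup could look; it is not part of the course material and assumes Python with scikit-learn on a synthetic dataset:

    # Minimal sketch (assumes scikit-learn; synthetic data for illustration only):
    # a train/validation split and a regularized linear classifier, where the
    # validation score reveals overfitting and C controls regularization strength.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import LinearSVC

    # Synthetic dataset with a few informative features
    X, y = make_classification(n_samples=1000, n_features=20,
                               n_informative=5, random_state=42)

    # Hold out a validation set so overfitting on the training set becomes visible
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.3, random_state=42)

    # Smaller C means stronger regularization of the linear decision boundary
    for C in (0.01, 1.0, 100.0):
        model = LinearSVC(C=C, max_iter=10000).fit(X_train, y_train)
        print(f"C={C}: train accuracy {model.score(X_train, y_train):.3f}, "
              f"validation accuracy {model.score(X_val, y_val):.3f}")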

Prerequisites:

Experience with job submission to large HPC machines using batch scripts; knowledge of the mathematical basics of linear algebra is helpful.


Participants should bring their own notebooks (with an SSH client).

Learning outcome:

After this course participants will have a general understanding of how to approach data analysis problems in a systematic way. In particular, the course provides insights into key benefits of parallelization, for example during n-fold cross-validation, where significant speed-ups can be obtained compared to serial methods. Participants will also gain a detailed understanding of why and how parallelization benefits a scalable data analysis process that applies machine learning methods to big data, as well as a general understanding of which problems deep learning algorithms are useful for and how parallel and scalable computing facilitates the learning process when facing big datasets. Participants will learn that deep learning can perform ‘feature learning’, which bears the potential to significantly speed up data analysis processes that previously required much manual feature engineering.
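To make the cross-validation speed-up mentioned above concrete, the following sketch distributes the independent folds of an n-fold cross-validation across MPI ranks; it is an illustrative example rather than the course's actual exercise material and assumes Python with mpi4py and scikit-learn on a synthetic dataset:

    # Minimal sketch: the n folds of a cross-validation are independent, so each
    # MPI rank can train and evaluate its share of the folds in parallel.
    # Run with, e.g.: mpirun -np 5 python cv_parallel.py
    from mpi4py import MPI
    from sklearn.datasets import make_classification
    from sklearn.model_selection import KFold
    from sklearn.svm import LinearSVC

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Every rank constructs the same synthetic dataset and the same fold layout
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    folds = list(KFold(n_splits=5, shuffle=True, random_state=0).split(X))

    # Round-robin assignment of folds to ranks; each fold is processed independently
    local_scores = []
    for i, (train_idx, test_idx) in enumerate(folds):
        if i % size == rank:
            model = LinearSVC(max_iter=10000).fit(X[train_idx], y[train_idx])
            local_scores.append(model.score(X[test_idx], y[test_idx]))

    # Collect the per-fold accuracies on rank 0 and report the average
    all_scores = comm.gather(local_scores, root=0)
    if rank == 0:
        scores = [s for part in all_scores for s in part]
        print("5-fold cross-validation accuracy:", sum(scores) / len(scores))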

Language:

This course is given in English.

Duration:

3 days

Date:

15-17 January 2018, 9:00-16:30

Venue:

Jülich Supercomputing Centre, Rotunda, building 16.4, room 301

Number of participants:

maximum 40

Instructors:

Prof. Morris Riedel, JSC

Contact:

Morris Riedel


Phone: +49 2461 61-3651


E-mail: m.riedel@fz-juelich.de

Registration:

Closed. This course is fully booked.


This course is a PRACE training course.

Last Modified: 20.05.2022