This lecture will be held in English.
Lecture in TUM Online: https://campus.tum.de/tumonline/WBMODHB.wbShowMHBReadOnly?pKnotenNr=457062
Tutorials in TUM Online: https://campus.tum.de/tumonline/lv.listEqualLectures?pStpSpNr=950462048&pSpracheNr=1
Moodle Course: https://www.moodle.tum.de/course/view.php?id=75077 (you can self-enroll here to gain access to the course material)
Participants need to have experience working on Linux or Linux-like systems as well as experience programming in C/C++.
The course starts with a motivation for parallel programming and a classification of parallel architectures. It focuses first on parallelization for distributed memory architectures with MPI. It introduces the major concepts, e.g., point-to-point communication, collective operations, communicators, virtual topologies, non-blocking communication, single-sided communication and parallel I/O. In addition, it covers the overall parallelization approach based on four phases, i.e., decomposition, assignment, orchestration and mapping. The next section presents dependence analysis as the major theoretical basis for parallelization. It introduces program transformations and discusses their profitability and safety based on data dependence analysis. The second major programming interface in the course is OpenMP for shared memory systems. This section covers most of the language concepts as well as proposed extensions. In the last part, the lecture presents novel programming interfaces, such as PGAS languages, threading building blocks, CUDA, OpenCL, and OpenACC.
For more information, please see TUMOnline.