Lecturer(s)
|
- Předota Milan, doc. RNDr. Ph.D.
- Musil Patrik, Mgr.
|
Course content
|
Content of lectures:
1. Introduction to parallel computing. What is parallel computing? Why do we need parallel computers? Parallelization strategies. First acquaintance with a simple parallel program.
2.-3. Parallelization with MPI, basic commands (MPI_Bcast, MPI_Reduce, MPI_Allreduce). Different ways of parallelizing loops.
4. Communication of arrays (MPI_Scatter, MPI_Gather).
5. C-language commands for working with files, input/output in parallel codes, comparison of different methods.
6. Timing of program execution, communication vs. computation time, optimization of the number of threads, efficiency.
7. Point-to-point communication in MPI (MPI_Send, MPI_Recv).
8. Commands for creating and managing the MPI parallel environment, execution of parallel tasks on multiple nodes, queue submission systems.
9. Differences in parallelization with respect to the cooperation of threads during the calculation. Shared vs. distributed memory, MPI vs. OpenMP parallel environments, shared vs. private variables.
10. Parallelization in OpenMP, basic commands (pragma, parallelization of loops, reduction).
11. Parallelization in OpenMP, advanced commands (pragma single/master, critical/atomic/ordered, parallelization of sections).
12.-13. Programming on graphics cards (GPUs) in OpenCL: host + kernel, passing of arguments, setting the number of GPU threads.
Content of tutorials/seminar: The exercises follow the content of the lectures. After the initial setup of access to the parallel cluster, tasks are solved in MPI (summing numbers and computing a factorial, evaluating a definite integral, computing the energy of a configuration of molecules, including runs on multiple machines and measurement of time efficiency), followed by OpenMP. Further tasks practice the commands and functions discussed in the lectures. Problems with large matrices and energy calculations are also solved in OpenCL. The goal is to teach students to solve problems ranging from simple ones, where individual threads can work independently with their data, to more complex ones, such as parallelizing the sorting of a data set. Minimal code sketches of several of these techniques (MPI collectives, MPI_Scatter/MPI_Gather, an OpenMP reduction, an OpenCL kernel) are given below.
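A minimal sketch of the loop parallelization described for lectures 2.-3., 6. and 8.: each MPI process sums its share of the iterations, MPI_Reduce collects the partial results on rank 0, and MPI_Wtime measures the run time. The midpoint-rule integral of 4/(1+x^2) on [0,1] is an illustrative assumption, not the course's own assignment.

/* MPI sketch (assumed example): parallel midpoint-rule integration.
 * Build and run, e.g.:  mpicc integral.c -o integral && mpirun -np 4 ./integral */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    const long n = 100000000;               /* number of subintervals */
    int rank, size;
    double h, local = 0.0, total = 0.0, t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* my process id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes */

    t0 = MPI_Wtime();                       /* timing: computation vs. communication */
    h = 1.0 / (double)n;

    /* cyclic distribution of loop iterations across processes */
    for (long i = rank; i < n; i += size) {
        double x = (i + 0.5) * h;
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    /* combine partial sums on rank 0; MPI_Allreduce would give the result to all */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("integral = %.12f  (%.3f s on %d processes)\n", total, t1 - t0, size);

    MPI_Finalize();
    return 0;
}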
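A sketch of the array communication from lecture 4., assuming a trivial element-wise operation: MPI_Scatter distributes equal chunks of an array from rank 0, each process squares its chunk, and MPI_Gather collects the results; the point-to-point commands MPI_Send/MPI_Recv from lecture 7. could implement the same exchange by hand.

/* MPI_Scatter / MPI_Gather sketch (assumed example: squaring array elements). */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define CHUNK 4                               /* elements per process (assumption) */

int main(int argc, char **argv)
{
    int rank, size;
    double *data = NULL, local[CHUNK];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                          /* only the root owns the full array */
        data = malloc((size_t)size * CHUNK * sizeof(double));
        for (int i = 0; i < size * CHUNK; i++)
            data[i] = (double)i;
    }

    /* hand CHUNK elements to every process (including the root itself) */
    MPI_Scatter(data, CHUNK, MPI_DOUBLE, local, CHUNK, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    for (int i = 0; i < CHUNK; i++)           /* independent work on the local chunk */
        local[i] *= local[i];

    /* collect the processed chunks back on the root, in rank order */
    MPI_Gather(local, CHUNK, MPI_DOUBLE, data, CHUNK, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("last element after squaring: %g\n", data[size * CHUNK - 1]);
        free(data);
    }
    MPI_Finalize();
    return 0;
}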
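The shared-memory counterpart from lecture 10., as a sketch: the same summation parallelized with a single OpenMP pragma. The loop variable and x are private to each thread, while sum is combined through the reduction clause.

/* OpenMP sketch (assumed example mirroring the MPI integral above).
 * Build, e.g.:  gcc -fopenmp integral_omp.c -o integral_omp */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    const long n = 100000000;
    const double h = 1.0 / (double)n;
    double sum = 0.0;
    double t0 = omp_get_wtime();

    /* one pragma parallelizes the loop; reduction(+:sum) merges per-thread sums */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++) {
        double x = (i + 0.5) * h;             /* x is private to each thread */
        sum += 4.0 / (1.0 + x * x);
    }
    sum *= h;

    printf("integral = %.12f  (%.3f s, up to %d threads)\n",
           sum, omp_get_wtime() - t0, omp_get_max_threads());
    return 0;
}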
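A compressed sketch of the host + kernel structure from lectures 12.-13., with all error checking omitted: the host builds a kernel from source, creates device buffers, passes the arguments with clSetKernelArg and chooses the number of GPU threads through the global work size. The vector-addition kernel is an illustrative assumption, not the course's matrix or energy task.

/* OpenCL host + kernel sketch (assumed example: vector addition, no error checks). */
#define CL_TARGET_OPENCL_VERSION 120
#include <stdio.h>
#include <CL/cl.h>

#define N 1024

static const char *src =                      /* kernel source passed as a string */
    "__kernel void vadd(__global const float *a,\n"
    "                   __global const float *b,\n"
    "                   __global float *c) {\n"
    "    int i = get_global_id(0);   /* one work-item (GPU thread) per element */\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void)
{
    float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    cl_platform_id plat;  cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* build the kernel and create device buffers initialized from host arrays */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vadd", NULL);
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(a), a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(b), b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(c), NULL, NULL);

    /* pass kernel arguments and launch N work-items (GPU threads) */
    clSetKernelArg(k, 0, sizeof(cl_mem), &da);
    clSetKernelArg(k, 1, sizeof(cl_mem), &db);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dc);
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof(c), c, 0, NULL, NULL);   /* blocking read */

    printf("c[N-1] = %g\n", c[N - 1]);
    clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dc);
    clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}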
|
Learning activities and teaching methods
|
Monologic (reading, lecture, briefing), Dialogic (discussion, interview, brainstorming), Work with text (with textbook, with book), Demonstration, Laboratory
- Class attendance: 39 hours per semester
- Preparation for classes: 26 hours per semester
- Preparation for credit: 23 hours per semester
- Preparation for exam: 24 hours per semester
|
Learning outcomes
|
Introduction to CPU parallelization, methods of parallelization, and their hardware and software implementation using MPI and OpenMP. A short introduction to parallelization on GPUs (graphics cards). Solution of sample practical tasks and work in a parallel environment.
Knowledge of MPI and OpenMP parallelization and of the principles of GPU parallelization. The ability to write parallel source code and/or to parallelize serial code.
|
Prerequisites
|
Knowledge of programming in any programming language (ideally C, which is used in the course) and basic skills in working with Linux at the command prompt (these can be acquired quickly).
|
Assessment methods and criteria
|
Oral examination, Test
Requirement for obtaining the course credit (zápočet): solving the credit task in the final practical test. Requirement for passing the exam: demonstrating at least 50% knowledge of the topics covered by the two drawn exam questions.
|
Recommended literature
|
- Van der Pas, R., Stotzer, E., Terboven, C. Using OpenMP - The Next Step: Affinity, Accelerators, Tasking, and SIMD. The MIT Press, 1st edition, 2017. ISBN 978-0262534789.
- Chapman, B., Jost, G., van der Pas, R. Using OpenMP: Portable Shared Memory Parallel Programming. The MIT Press, 2007.
- Barlas, G. Multicore and GPU Programming: An Integrated Approach. Morgan Kaufmann, 2014. ISBN 978-0124171374.
- Chandra, R., Dagum, L., Kohr, D., Maydan, D. Parallel Programming in OpenMP. Morgan Kaufmann, 2000.
- Quinn, M. J. Parallel Programming in C with MPI and OpenMP. McGraw-Hill Higher Education, 2003.
- Gropp, W., Lusk, E., Skjellum, A. Using MPI: Portable Parallel Programming with the Message Passing Interface, 3rd edition (Scientific and Engineering Computation). The MIT Press, 2014.
|