SF2568 Parallel Computations for Large-Scale Problems 7.5 credits
Advanced course giving a basic understanding of how to develop numerical algorithms and how these can be implemented on computers with distributed memory using the message-passing paradigm.
Content and learning outcomes
- Basic ideas including hardware architectures, memory hierarchies, communications, parallelization strategies, measures of efficiency;
- HPC and Green Computing;
- Introduction to MPI, the Message Passing Interface;
- Simple numerical algorithms including matrix operations, Gaussian elimination;
- Algorithms on graphs including graph partitioning problems;
- Parallel sorting;
- More advanced parallel algorithms;
- Standard libraries;
Intended learning outcomes
The goal of the course is to provide a basic understanding of how to develop algorithms and how to implement them on distributed-memory computers using the message-passing paradigm.
After completion of the course components the student shall be able to:
- select and/or develop algorithms and data structures for solving a given problem, after having analyzed and identified properties of the problem that allow efficient parallelization;
- theoretically analyze a given parallel algorithm with respect to efficiency and afterwards experimentally evaluate a program for parallel computing by running it on a high-performance computer;
- implement a given algorithm on a distributed-memory computer using the message passing library MPI;
- independently solve a more complex problem and present the results both orally and in writing in a scientific manner;
- identify challenges of Green Computing in HPC.
Literature and preparations
- English B / English 6
- Completed basic course in numerical analysis (SF1544, SF1545 or equivalent) and
- Completed basic course in computer science (DD1320 or equivalent).
Basic programming skills, preferably in C, C++, or Fortran. For those comfortable only with Java or Python, a short introduction to C will be provided.
Barry Wilkinson, Michael Allen: Parallel Programming, 2nd ed., Pearson Education International 2005, ISBN 0-13-191865-6.
Peter S. Pacheco: A User's Guide to MPI, available for download online.
Michael Hanke: Lecture Notes.
Examination and completion
If the course is discontinued, students may request to be examined during the following two academic years.
- HEMA - Assignment, 4.5 credits, grading scale: A, B, C, D, E, FX, F
- PROA - Project, 3.0 credits, grading scale: A, B, C, D, E, FX, F
Based on a recommendation from KTH's coordinator for disabilities, the examiner will decide how to adapt an examination for students with a documented disability.
The examiner may apply another examination format when re-examining individual students.
Opportunity to complete the requirements via supplementary examination
Opportunity to raise an approved grade via renewed examination
- All members of a group are responsible for the group's work.
- In any assessment, every student shall honestly disclose any help received and sources used.
- In an oral assessment, every student shall be able to present and answer questions about the entire assignment and solution.
Further information about the course can be found on the Course web SF2568 at the link below. Information on the Course web will later be moved to this site.
Main field of study
Please discuss with the course leader.