DN2264 Parallel Computations for Large-Scale Problems, Part 1 6.0 credits
This course has been discontinued.
Last planned examination: Spring 2017
Decision to discontinue this course:
No information inserted
Content and learning outcomes
Course contents
Basic ideas including hardware architectures, memory hierarchies, communications, parallelization strategies, measures of efficiency;
Simple numerical algorithms including matrix operations, Gaussian elimination;
Algorithms on graphs including graph partitioning problems;
Parallel sorting;
More advanced parallel problems including the n-body problem;
Advanced numerical methods including multi-grid and FFT methods;
Standard libraries.
Intended learning outcomes
The overall goal of the course is to provide a basic understanding of how to develop algorithms and how to implement them on distributed-memory computers using the message-passing paradigm.
This means that after the course you will be able to
- explain parallelization strategies;
- select and/or develop an algorithm for solving a given problem that has the potential for efficient parallelization;
- select and/or develop data structures for implementing parallel computations;
- theoretically analyze a given parallel algorithm with respect to efficiency;
- implement a given algorithm on a distributed-memory computer using the message-passing library MPI;
- understand the message flow and avoid unwanted situations (e.g. deadlock, synchronization delays);
- modify and adapt a set of basic routines to special situations;
- experimentally evaluate the performance of a parallel program;
- explain differences between the theoretically expected performance and the practically observed performance.
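The deadlock avoidance mentioned in the outcomes above can be illustrated without MPI itself. The following is a minimal sketch in plain Python (the helper names `send`/`recv`, the channel layout, and the rank count are illustrative assumptions, not part of the course material): pairs of "ranks" exchange values, and the call order alternates by parity, the standard technique so that with synchronous, unbuffered sends (as with `MPI_Ssend` in an actual C/MPI program) the two partners never both block in the send at the same time.

```python
import threading
import queue

# Simulated point-to-point channels between "ranks":
# chan[dst][src] carries messages from rank src to rank dst.
# NOTE: queue.Queue(maxsize=1) buffers one message, so this toy
# cannot actually deadlock; with truly synchronous (rendezvous)
# sends, as in MPI_Ssend, two partners both sending first would.
NPROCS = 4
chan = [[queue.Queue(maxsize=1) for _ in range(NPROCS)]
        for _ in range(NPROCS)]

def send(src, dst, msg):
    chan[dst][src].put(msg)      # blocks if the buffer slot is full

def recv(dst, src):
    return chan[dst][src].get()  # blocks until a message arrives

results = [None] * NPROCS

def worker(rank):
    # Exchange a value with the neighbouring rank (pairs 0<->1, 2<->3).
    # Even ranks send first, odd ranks receive first, so the
    # communications always match up pairwise.
    partner = rank ^ 1
    if rank % 2 == 0:
        send(rank, partner, rank)
        results[rank] = recv(rank, partner)
    else:
        results[rank] = recv(rank, partner)
        send(rank, partner, rank)

threads = [threading.Thread(target=worker, args=(r,))
           for r in range(NPROCS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # each rank ends up with its partner's rank
```

The same parity trick carries over directly to MPI: even-numbered ranks call the send routine before the receive, odd-numbered ranks do the opposite.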
Literature and preparations
Specific prerequisites
Single course students: 90 university credits including 45 university credits in Mathematics or Information Technology. English B, or equivalent.
Recommended prerequisites
Basic courses in numerical analysis equivalent to DN1212 or DN1240, and computer science equivalent to DD1320 or DD1321, preferably in C, C++ or Fortran. For those who are comfortable with Java or Python, a short introduction to C will be provided.
Equipment
Literature
Barry Wilkinson, Michael Allen: Parallel Programming, 2nd ed., Pearson Education International 2005, ISBN 0-13-191865-6.
Peter S. Pacheco: A User's Guide to MPI, available for purchase at the students' office.
Michael Hanke: Lecture Notes, available for purchase at the students' office.
Examination and completion
If the course is discontinued, students may request to be examined during the following two academic years.
Grading scale
Examination
- HEM1 - Assignment, 3.0 credits, grading scale: P, F
- LAB1 - Laboratory Work, 3.0 credits, grading scale: A, B, C, D, E, FX, F
Based on recommendation from KTH’s coordinator for disabilities, the examiner will decide how to adapt an examination for students with documented disability.
The examiner may apply another examination format when re-examining individual students.
In this course, all regulations of the code of honour of the School of Computer Science and Communication apply, see: http://www.kth.se/csc/student/hederskodex/1.17237?l=en_UK.
Other requirements for final grade
Homework and a mid-term quiz (HEM1; 3 university credits)
Lab report (LAB1; 3 university credits)
Opportunity to complete the requirements via supplementary examination
Opportunity to raise an approved grade via renewed examination
Examiner
Ethical approach
- All members of a group are responsible for the group's work.
- In any assessment, every student shall honestly disclose any help received and sources used.
- In an oral assessment, every student shall be able to present and answer questions about the entire assignment and solution.
Further information
Course room in Canvas
Offered by
Main field of study
Education cycle
Add-on studies
DN2265 Parallel Computations for Large-Scale Problems, part 2.
Contact
Supplementary information
Please observe that attendance at the first lesson is compulsory.