Writing parallel applications using MPI


Message passing is currently the most widely deployed programming model in massively parallel high-performance computing. It is suitable for programming a wide range of current computer architectures, from multi-core desktop machines to the fastest HPC systems in the world, which offer several hundred thousand processing elements.

Time: Mon 2017-12-11 09.00 - Wed 2017-12-13 17.00

Lecturer: Erwin Laure and other PDC staff

Location: Room 523, 5th floor Teknikringen 14, PDC, KTH main campus

Description

This three-day course will teach you how to write parallel programs using MPI. It will be delivered by Erwin Laure and other PDC staff.

The course is at beginner level and assumes no prior experience in parallel computing. The concepts behind message passing and distributed-memory computing will be introduced, and the syntax of the key MPI calls will be explained. The course covers point-to-point communication, non-blocking communication and collective communication calls (a minimal sketch of these three styles is shown below). Practical sessions to deepen your understanding of the lectures will be part of the course. By the end of the course, participants should be able to write their own MPI programs at an intermediate level. The teaching language for the course will be English.
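As a taste of what the practical sessions involve, the following is a minimal C sketch of the three communication styles named above. It is an illustrative example, not actual course material; the file name and run commands are assumptions, and the exact compiler wrappers and job-launch commands on the PDC systems will be covered in the labs.

/* Minimal sketch of the MPI call styles covered in the course:
   point-to-point (MPI_Send/MPI_Recv), non-blocking
   (MPI_Isend/MPI_Irecv/MPI_Waitall) and collective (MPI_Reduce). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes */

    /* Point-to-point: rank 0 sends one integer to rank 1 (blocking). */
    if (size >= 2) {
        int token = 42;
        if (rank == 0) {
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Rank 1 received %d from rank 0\n", token);
        }
    }

    /* Non-blocking: exchange ranks around a ring; post the receive
       and send, then wait, so computation could overlap in between. */
    int out = rank, in = -1;
    int right = (rank + 1) % size, left = (rank + size - 1) % size;
    MPI_Request reqs[2];
    MPI_Irecv(&in, 1, MPI_INT, left, 1, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&out, 1, MPI_INT, right, 1, MPI_COMM_WORLD, &reqs[1]);
    /* ... independent computation could go here ... */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    /* Collective: sum all ranks onto rank 0. */
    int sum = 0;
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("Sum of ranks 0..%d = %d\n", size - 1, sum);

    MPI_Finalize();
    return 0;
}

On a typical MPI installation this would be built and run with something like "mpicc example.c -o example" followed by "mpirun -n 4 ./example" (hypothetical file name; PDC systems may use a different launcher, e.g. a batch scheduler).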

Prerequisites

You will need an account on the PDC systems in order to participate in the computer labs for the course. If you do not already have a PDC account, please apply for one in good time before the course. Participants should be able to write programs in either C or Fortran.

Agenda and course material

To be announced.

Contact

PDC Support
