Parallel programming solves big numerical problems by dividing them into smaller sub-tasks, thereby reducing the overall computational time on multi-processor and/or multi-core machines. Parallel programming is well supported in traditional programming languages like C and FORTRAN, which are suitable for “heavy-duty” computational tasks. Python has traditionally been considered to have poor support for parallel programming, partly because of the global interpreter lock (GIL). However, things have changed over time: thanks to the development of a rich variety of libraries and packages, the support for parallel programming in Python is now much better.
This post (and the following part) will briefly introduce the multiprocessing module in Python, which effectively side-steps the GIL by using subprocesses instead of threads. The multiprocessing module provides many useful features and is very suitable for symmetric multiprocessing (SMP) and shared memory systems. In this post we focus on the Pool class of the multiprocessing module, which controls a pool of worker processes and supports both synchronous and asynchronous parallel execution.
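As a minimal sketch of these two execution styles (the `square` function and the pool size are arbitrary choices for illustration), the following example uses `Pool.map` for synchronous execution and `Pool.apply_async` for asynchronous execution:

```python
from multiprocessing import Pool

def square(x):
    """A trivially parallelisable task: compute x squared."""
    return x * x

if __name__ == "__main__":
    # Create a pool of 4 worker processes (an arbitrary choice here)
    with Pool(processes=4) as pool:
        # Synchronous: map blocks until all results are available
        results = pool.map(square, range(8))
        print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]

        # Asynchronous: apply_async returns immediately; get() collects
        # the result later, allowing other work in between
        handle = pool.apply_async(square, (7,))
        print(handle.get())  # 49
```

Because each worker is a separate process with its own interpreter, the GIL does not serialise the workers the way it would for threads.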
This entry was posted by Kjartan Thor Wikfeldt.
Jupyter Notebooks are gaining in popularity across many academic and industrial research fields. The in-browser, cell-based user interface of the Notebook application enables researchers to interleave code (in many different programming languages) with rich text, graphics, equations, and so forth. Typical use cases include quick prototyping of code, interactive data analysis and visualisation, keeping digital notebooks for daily tasks, and teaching. Notebooks are also being used for reproducible workflows and to share scientific analysis with colleagues or whole research communities. High Performance Computing (HPC) is rapidly catching up with this trend and many HPC providers, including PDC, now offer Jupyter Notebooks as a way to interact with their HPC resources.
In October last year, PDC organised a workshop on “HPC Tools for the Modern Era”. One of the workshop modules focused on possible use cases of Jupyter Notebooks in an HPC environment. The notebooks that were used during the workshop are available from the PDC Support GitHub repository (https://github.com/PDC-support/jupyter-notebook), and the rest of this post discusses an example use case based on one of those workshop notebooks.
Jupyter Notebooks are suitable for various HPC usage patterns and workflows. In this blog post we will demonstrate one possible use case: interacting with the SLURM scheduler from a notebook running in your browser, submitting and monitoring batch jobs, and performing lightweight interactive analysis of running jobs. Note that this usage of Jupyter is possible on both Tegner and Beskow. In a future blog post, we will demonstrate how to run interactive analysis directly on a Tegner compute node to perform heavy analysis on large datasets, which might, for instance, be generated from other jobs running at Tegner or Beskow (since both clusters share the klemming file system). That use case will, however, not be possible on Beskow, since the Beskow compute nodes have restricted network access to the outside world.
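To give a flavour of what such interaction can look like, the helper functions below (hypothetical names, written for this post) simply wrap the standard SLURM command-line tools via `subprocess`, so that you could submit a batch script and query its state from notebook cells:

```python
import subprocess

def parse_job_id(sbatch_output):
    """Extract the numeric job ID from sbatch's
    'Submitted batch job <id>' message."""
    return int(sbatch_output.split()[-1])

def submit_job(script_path):
    """Submit a batch script with sbatch and return its job ID."""
    out = subprocess.run(["sbatch", script_path],
                         capture_output=True, text=True, check=True)
    return parse_job_id(out.stdout)

def job_state(job_id):
    """Return the short state code (PD, R, ...) reported by squeue,
    or an empty string if the job is no longer in the queue."""
    out = subprocess.run(["squeue", "-h", "-j", str(job_id), "-o", "%t"],
                         capture_output=True, text=True)
    return out.stdout.strip()
```

This sketch assumes the SLURM tools (`sbatch`, `squeue`) are on the PATH of the machine where the notebook kernel runs, as is the case on the PDC login nodes.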
The following overview will assume that you have some familiarity with Jupyter Notebooks, but if you’ve never tried them out, there are plenty of online resources to get you started. Apart from using the notebooks from the PDC/PRACE workshop (which were mentioned earlier), you can, for example:
High performance computing (HPC) clusters are able to solve big problems using a large number of processors. This is also known as parallel computing, where many processors work simultaneously to produce exceptional computational power and to significantly reduce the total computational time. In such scenarios, scalability or scaling is widely used to indicate the ability of hardware and software to deliver greater computational power when the amount of resources is increased. For HPC clusters, it is important that they are scalable, in other words that the capacity of the whole system can be proportionally increased by adding more hardware. For software, scalability is sometimes referred to as parallelization efficiency — the ratio between the actual speedup and the ideal speedup obtained when using a certain number of processors.
In this post we focus on software scalability and discuss two common types of scaling. The speedup in parallel computing can be straightforwardly defined as
speedup = t1 / tN
where t1 is the computational time for running the software on one processor, and tN is the computational time for running the same software on N processors. Ideally, we would like software to have a linear speedup that is equal to the number of processors (speedup = N), as that would mean that every processor would be contributing 100% of its computational power. Unfortunately, this is a very challenging goal for real applications to attain.
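These definitions translate directly into code. In the short sketch below, the timing numbers are made up purely for illustration:

```python
def speedup(t1, tn):
    """Speedup from t1 (time on one processor) and tn (time on N)."""
    return t1 / tn

def efficiency(t1, tn, n):
    """Parallel efficiency: actual speedup divided by the ideal speedup N."""
    return speedup(t1, tn) / n

# Hypothetical timings: 100 s on one processor, 16 s on 8 processors
print(speedup(100.0, 16.0))        # 6.25
print(efficiency(100.0, 16.0, 8))  # 0.78125
```

Here the efficiency of about 78% means each of the 8 processors contributes, on average, 78% of its computational power to the overall speedup.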
Sometimes it can be a daunting task to get all the Kerberos and SSH configurations right on your first attempt at using the PDC systems. Nearly every day PDC Support receives a number of help requests and questions from researchers who have run into configuration problems. So PDC has introduced an alternative way of logging in to our clusters by using Docker containers with pre-configured Kerberos and SSH files.
What is Docker?
Docker is a tool used to deploy and run one or many applications within what are known as containers. Containers are packed with everything that is needed to run an application: the application itself, plus any relevant tools, libraries and other necessary files. Each Docker container is delivered as a single package with all the necessary material included. This means that the person using the Docker container does not have to worry about installing any new tools/libraries or configuring them before using them. Since everything is preloaded, the applications inside the containers are ready to use and can be executed regardless of any customized settings on the host machine.
Later in this blog article, we will talk about a Docker container that we have developed for the purpose of logging in to PDC clusters.
Our supercomputer clusters at PDC, equipped with thousands of multi-core processors, can be used to solve large scientific/engineering problems. Because of their much higher performance compared to desktop computers or workstations, supercomputer clusters are also called high performance computing (HPC) clusters. Common application fields of HPC clusters include machine learning, galaxy simulation, climate modelling, bioinformatics, computational physics, quantum chemistry, etc.
Building an HPC cluster demands sophisticated technologies and hardware, but fortunately a regular HPC user doesn’t have to worry too much about that. As an HPC user you can submit jobs that request the compute nodes (physical groups of processors) to do the calculations/simulations you want. But note that you are not the only user of an HPC cluster: there are typically many users using the cluster during the same time period, and all of them will be submitting their own jobs. You may have realized by now that there needs to be some sort of queueing system that organizes the jobs and distributes them to the compute nodes. This post will briefly introduce SLURM, which is used in all PDC clusters and is the most widely used workload manager for HPC clusters.
What is SLURM?
SLURM, or Simple Linux Utility for Resource Management, is an open-source cluster management and job scheduling system. It provides three key functions:
Allocation of resources (compute nodes) to users’ jobs
Framework for starting/executing/monitoring jobs
Queue management to avoid resource contention
In other words, SLURM oversees all the resources in the whole HPC cluster. Users then send their jobs (requests to run calculations/simulations) to SLURM for later execution. SLURM keeps all the submitted jobs in a queue and decides what priorities the jobs have and how the jobs are distributed to the compute nodes. SLURM provides a series of useful commands for the user, which we will now go through.