
Parallel Computing

Our research focuses on the design of parallel programming models to tackle the computing challenges at exascale. We investigate extending the Message Passing Interface (MPI) with streaming and communication-offloading models so that supercomputers can better support scientific applications with irregular, fine-grained communication.

We also study task-based approaches to shared-memory programming, such as OpenMP, Qthreads and Cilk, with the goal of providing automatic load balancing in scientific applications. In particular, we investigate the design and implementation of locality-aware schedulers for task-based runtimes.

Finally, we design a data-centric approach for HPC: computing is offloaded to where the data resides (compute-node local memory, flash and NVRAM, and I/O systems). This data-centric approach targets future exascale systems with highly hierarchical and diverse memory and I/O subsystems. In particular, we investigate the programmability of NVRAM and its use in exascale supercomputers.

Performance characterization of streaming computing on supercomputers

  • CRESTA, Collaborative research into exascale systemware, tools and applications  
Page responsible: Web editors at EECS
Belongs to: Computational Science and Technology
Last changed: Apr 30, 2021