Our research focuses on the design of parallel programming models to tackle the computing challenges of exascale systems. We investigate extending the Message Passing Interface (MPI) with streaming and communication-offloading models, in order to support scientific applications with irregular, fine-grained communication on supercomputers.
We also study task-based approaches to shared-memory programming, such as OpenMP, Qthreads and Cilk, which provide automatic load balancing in scientific applications. Specifically, we explore the design and implementation of locality-aware schedulers for task-based runtimes.
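As a toy illustration of the locality-aware scheduling idea (a simplified sketch, not any of the actual runtimes named above), the following Python class keeps one task queue per locality domain, e.g. per NUMA node. A worker prefers tasks whose data lives in its own domain and steals from other domains only when its local queue is empty, combining locality with automatic load balancing. All names here are hypothetical.

```python
from collections import deque


class LocalityAwareScheduler:
    """Toy locality-aware task scheduler: one queue per locality
    domain (e.g. one per NUMA node). Workers pull local tasks first
    and fall back to stealing from remote domains when idle."""

    def __init__(self, num_domains):
        self.queues = [deque() for _ in range(num_domains)]

    def submit(self, task, domain):
        # Enqueue the task on the domain where its data resides.
        self.queues[domain].append(task)

    def next_task(self, worker_domain):
        # 1. Prefer the worker's own queue (data locality).
        if self.queues[worker_domain]:
            return self.queues[worker_domain].popleft()
        # 2. Otherwise steal from any non-empty remote queue
        #    (load balancing at the cost of a remote access).
        for queue in self.queues:
            if queue:
                return queue.popleft()
        return None  # no work anywhere
```

A worker on domain 0 would first drain domain 0's queue and only then steal tasks submitted to domain 1; a real scheduler would add concurrency control and a steal-victim policy, which are omitted here for brevity.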
Finally, we design a data-centric approach for HPC, in which computation is offloaded to where the data resides (compute-node local memory, flash and NVRAM memories, and I/O systems). This data-centric approach targets future exascale systems with highly hierarchical and diverse memory and I/O systems. In particular, we investigate the programmability of NVRAM and its use in exascale supercomputers.
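The data-centric placement decision can be sketched as a small cost model (purely illustrative; the tier names and per-byte movement costs below are invented assumptions, not measured values): rather than always moving data to the compute, run the computation at the memory tier that already holds the bulk of its input bytes.

```python
# Hypothetical relative cost of moving one byte FROM each tier.
# These numbers are illustrative only.
MOVE_COST_PER_BYTE = {"dram": 1, "nvram": 5, "flash": 20, "io": 100}


def choose_tier(data_placement):
    """Pick the tier at which to run a computation.

    data_placement maps tier name -> bytes of the computation's
    input stored on that tier. Offloading to the chosen tier
    minimizes the total cost of pulling in the remaining bytes."""
    def movement_cost(target):
        # Cost of moving every *other* tier's bytes to `target`.
        return sum(nbytes * MOVE_COST_PER_BYTE[tier]
                   for tier, nbytes in data_placement.items()
                   if tier != target)
    return min(data_placement, key=movement_cost)
```

For example, if most input bytes already sit in NVRAM, the model offloads the computation to the NVRAM side rather than staging the data into DRAM first.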
- EPiGRAM, designing the Message-Passing and PGAS programming models for exascale
- SAGE, percipient storage for data centric exascale computing
- INTERTWinE, programming-model interoperability towards exascale
- AllScale, recursive nested parallelism for exascale
- CRESTA, Collaborative research into exascale systemware, tools and applications