🤖 AI Summary
Traditional MPI C interfaces lack type safety and generic programming support, hindering the adoption of modern C++ in high-performance computing (HPC). To address this, we propose a layout-agnostic message-passing abstraction that—leveraging the Noarr library—introduces first-class data layout and traversal abstractions into the MPI communication layer for the first time. Our approach decouples communication semantics from memory layout while enabling type-safe, generic, and composable MPI interfaces. By tightly integrating modern C++ template metaprogramming with low-level MPI mechanisms, our design preserves near-native MPI performance while significantly improving interface safety, expressiveness, and modularity. We evaluate the framework using distributed GEMM as a case study: results demonstrate enhanced code reusability, greater development flexibility, and zero computational overhead relative to baseline MPI implementations.
📝 Abstract
The Message Passing Interface (MPI) has been a well-established technology in the domain of distributed high-performance computing for several decades. However, one of its greatest drawbacks is its rather dated pure-C interface, which lacks many useful features of modern languages (namely C++), such as basic type checking and support for generic code design. In this paper, we propose a novel abstraction for MPI, implemented as an extension of the C++ Noarr library. It follows the Noarr paradigms of first-class layout and traversal abstraction and offers a layout-agnostic design of MPI applications. As a case study, we implemented a layout-agnostic distributed GEMM kernel to demonstrate the usability and syntax of the proposed abstraction. We show that the abstraction achieves performance comparable to state-of-the-art MPI C++ bindings while allowing a more flexible design of distributed applications.