🤖 AI Summary
This work proposes a novel architecture-aware collective communication algorithm to address the all-to-all communication bottleneck on emerging many-core supercomputers. By holistically considering message size, process count, node topology, and system partitioning, the algorithm optimizes data scheduling and communication pathways. Evaluated on a 32-node system based on Intel Sapphire Rapids processors, the proposed method achieves up to a 3× speedup over state-of-the-art MPI implementations, significantly enhancing communication efficiency for applications such as fast Fourier transforms, matrix transposition, and machine learning workloads.
📄 Abstract
Performant all-to-all collective operations in MPI are critical to fast Fourier transforms, transposition, and machine learning applications. There are many existing implementations for all-to-all exchanges on emerging systems, with the achieved performance dependent on many factors, including message size, process count, architecture, and parallel system partition. This paper presents novel all-to-all algorithms for emerging many-core systems. Further, the paper presents a performance analysis against existing algorithms and system MPI, with the novel algorithms achieving up to 3× speedup over system MPI at 32 nodes of state-of-the-art Sapphire Rapids systems.

CCS Concepts: • Computing methodologies → Parallel computing methodologies; Parallel algorithms; Massively parallel algorithms; Concurrent algorithms.
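The paper's novel algorithms are not reproduced here, but the kind of data scheduling an all-to-all exchange requires can be illustrated with the classic pairwise-exchange algorithm, a standard baseline that MPI libraries commonly use for larger messages. This is a minimal sketch for illustration only (function names are hypothetical, and it assumes a power-of-two process count); it is not the architecture-aware algorithm proposed in this work:

```python
def pairwise_alltoall_schedule(p):
    """Return the per-step exchange pairs for the pairwise-exchange all-to-all.

    At step k (1 <= k < p), rank i exchanges blocks with rank i XOR k, so
    every pair of ranks communicates exactly once over the p - 1 steps.
    Assumes p is a power of two.
    """
    assert p > 0 and p & (p - 1) == 0, "pairwise exchange assumes power-of-two p"
    return [[(i, i ^ k) for i in range(p)] for k in range(1, p)]

def simulate(p):
    """Simulate the exchange to check correctness.

    send[src][dst] is src's block destined for dst; after the schedule runs,
    recv[dst][src] should hold exactly that block for every (src, dst) pair.
    """
    send = [[(src, dst) for dst in range(p)] for src in range(p)]
    recv = [[None] * p for _ in range(p)]
    for i in range(p):
        recv[i][i] = send[i][i]  # local copy, no communication needed
    for step in pairwise_alltoall_schedule(p):
        for i, partner in step:
            # i receives from partner the block partner holds destined for i
            recv[i][partner] = send[partner][i]
    return recv
```

Real implementations vary the schedule (e.g. Bruck's algorithm for small messages, hierarchical node-aware variants for multi-core nodes) precisely because, as the abstract notes, the best choice depends on message size, process count, and system architecture.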