🤖 AI Summary
This work addresses the challenge of efficiently executing stencil computations on Cerebras’ Wafer-Scale Engine (WSE), whose asynchronous distributed architecture complicates high-performance implementation. The paper presents the first MLIR-based automatic compilation pipeline that maps mathematically specified stencil kernels to highly optimized CSL code without requiring any modification to the application source. By deeply integrating stencil domain semantics with the hardware characteristics of the WSE2 and WSE3, the approach enables fully automated code generation and optimization. Evaluated on five HPC benchmarks, the automatically generated code matches or slightly exceeds the performance of hand-optimized implementations. On the WSE3, it achieves approximately a 14× speedup over a 128-GPU A100 system and about a 20× speedup over a 128-node CPU-based supercomputer.
📝 Abstract
The Cerebras Wafer-Scale Engine (WSE) delivers performance at an unprecedented scale of over 900,000 compute units, all connected via a single-wafer on-chip interconnect. Initially designed for AI, the WSE architecture is also well-suited for High Performance Computing (HPC). However, its distributed asynchronous programming model diverges significantly from the simple sequential or bulk-synchronous programs that one would typically derive from a given mathematical program description, so porting existing code to the WSE requires a bespoke re-implementation. The absence of WSE support in compiler frameworks such as MLIR meant that there was little hope of automating this process. Stencils are ubiquitous in HPC, and in this paper we explore the hypothesis that domain-specific information about stencils can be leveraged by the compiler to automatically target the WSE without requiring application-level code changes. We present a compiler pipeline that transforms stencil-based kernels into highly optimized CSL code for the WSE, bridging the semantic gap between the mathematical representation of the problem and the WSE's asynchronous execution model. Based on five benchmarks across three HPC programming technologies, running on both the Cerebras WSE2 and WSE3, our approach delivers performance comparable to, and in some cases slightly better than, manually optimized code. Furthermore, without requiring any application-level code changes, performance on the WSE3 with our approach is around 14 times faster than 128 Nvidia A100 GPUs and 20 times faster than 128 nodes of a CPU-based Cray-EX supercomputer.
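For readers unfamiliar with the term, a stencil computation updates every grid point from a fixed pattern of neighboring points, sweep after sweep. The following is a minimal, purely illustrative sketch of a 2D five-point Jacobi stencil in plain Python; it is not code from the paper, whose pipeline compiles such kernels through MLIR down to CSL for the WSE rather than executing them like this.

```python
# Illustrative sketch of a stencil kernel (not the paper's implementation):
# a 2D five-point Jacobi sweep, where each interior point becomes the
# average of its four von Neumann neighbors.
def jacobi_step(grid):
    """Perform one Jacobi sweep over a 2D grid (list of lists of floats).

    Boundary points are left unchanged; only interior points are updated.
    """
    n, m = len(grid), len(grid[0])
    new = [row[:] for row in grid]  # copy so all reads see the old sweep
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                + grid[i][j - 1] + grid[i][j + 1])
    return new
```

The fixed, regular access pattern (each point reads only its immediate neighbors) is exactly the domain-specific structure the paper's compiler exploits: it determines which data must be exchanged between adjacent processing elements on the wafer, enabling automatic mapping to the WSE's asynchronous execution model.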