🤖 AI Summary
Existing general-purpose compilers (e.g., LLVM, GCC) rely on CPU-centric abstractions, unstructured intermediate representations (IRs), and hardware-agnostic optimization objectives, making them ill-suited to generating high-performance DNN microkernels for novel hardware, particularly RISC-V cores with custom ISA extensions. This paper proposes a multi-level, structured-IR backend paradigm that progressively lowers computation through hardware-aware abstractions, mapping loop nests precisely onto the target's hardware loops and streaming registers to exploit its custom ISA. It replaces conventional register allocation and spilling heuristics with an incremental, domain-specific code generation strategy over structured IRs. Experiments show that the proposed backend reaches up to 90% FPU utilization on key DNN microkernels, significantly outperforming LLVM and GCC, while remaining scalable, faithful to the hardware, and specialized to the domain. The approach establishes a robust, extensible compilation infrastructure tailored to heterogeneous AI accelerators.
📝 Abstract
High-performance micro-kernels must fully exploit today's diverse and specialized hardware to deliver peak performance to DNNs. While higher-level optimizations for DNNs are offered by numerous compilers (e.g., MLIR, TVM, OpenXLA), performance-critical micro-kernels are left to specialized code generators or handwritten assembly. Even though widely-adopted compilers (e.g., LLVM, GCC) offer tuned backends, their CPU-focused input abstraction, unstructured IR, and general-purpose best-effort design inhibit tailored code generation for innovative hardware. We think it is time to widen the classical hourglass backend and embrace progressive lowering across a diverse set of structured abstractions to bring domain-specific code generation to compiler backends. We demonstrate this concept by implementing a custom backend for a RISC-V-based accelerator with hardware loops and streaming registers, leveraging knowledge about the hardware at levels of abstraction that match its custom ISA. We use incremental register allocation over structured IRs, while dropping classical spilling heuristics, and show up to 90% FPU utilization across key DNN kernels. By breaking the backend hourglass model, we reopen the path from domain-specific abstractions to specialized hardware.