🤖 AI Summary
To address the low inference efficiency of the 671B-parameter DeepSeek-R1 model with Multi-Head Latent Attention (MLA) on single-node multi-GPU deployments (NVIDIA H20), this paper proposes the Efficient Transpose Attention Pipeline (ETAP). ETAP transposes the KV cache so that the KV context length aligns with the M-dimension of WGMMA computation, eliminating redundant memory accesses and computation. It further integrates operator-level kernel reordering, mixed-precision pipelining, and an RMSE-constrained numerical stability mechanism. The design offers theoretical scalability, strong hardware adaptability, and broad framework compatibility. Experiments at 64K sequence length with batch size 16 show that ETAP outperforms FlashMLA, FlashAttention-3, and FlashInfer by 2.78×, 5.24×, and 4.94×, respectively, while maintaining a low RMSE of 1.25×10⁻⁵.
📝 Abstract
Deploying the DeepSeek-R1 671B model on a single multi-GPU server poses a challenge for efficient Multi-Head Latent Attention (MLA) inference. This paper introduces FlashMLA-ETAP, a novel framework that enhances MLA inference for single-instance deployment on NVIDIA H20 GPUs. We propose the Efficient Transpose Attention Pipeline (ETAP), which reconfigures attention computation through transposition to align the KV context length with the M-dimension in WGMMA operations, significantly reducing redundant computation. FlashMLA-ETAP achieves a 2.78× speedup over FlashMLA at 64K sequence length (batch size 16), with 5.24× and 4.94× improvements over FlashAttention-3 and FlashInfer, respectively, while maintaining numerical stability with a 15.2× lower RMSE (1.25×10⁻⁵) than FlashAttention-3. Furthermore, ETAP's design enables seamless integration into frameworks like FlashAttention-3 and FlashInfer, supported by a detailed theoretical analysis. Our work addresses a critical gap in resource-constrained inference, offering a scalable solution for mid-tier GPUs and paving the way for broader adoption in hardware-aware optimization. Code is available at https://github.com/pengcuo/FlashMLA-ETAP.
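The core transposition idea can be illustrated in plain NumPy: instead of computing scores as S = QKᵀ (placing the long KV context on the matmul's N-dimension), compute Sᵀ = KQᵀ so the KV length lands on the M (row) dimension, then recover the same output. This is a minimal mathematical sketch only; the names and shapes below are illustrative assumptions, and the actual kernel operates on Hopper WGMMA tiles with mixed precision.

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
q_len, kv_len, d = 4, 64, 16  # decode-style: short query, long KV context
Q = rng.standard_normal((q_len, d))
K = rng.standard_normal((kv_len, d))
V = rng.standard_normal((kv_len, d))

# Standard attention: scores are (q_len x kv_len), so the long KV axis
# sits on the N-dimension of the score matmul.
S = Q @ K.T / np.sqrt(d)
O_std = softmax(S, axis=1) @ V

# Transposed formulation: compute S^T = K Q^T, so the long KV axis sits
# on the M (row) dimension; the softmax now runs over axis 0.
S_T = K @ Q.T / np.sqrt(d)          # (kv_len x q_len)
P_T = softmax(S_T, axis=0)
O_tr = (V.T @ P_T).T                # recover the (q_len x d) output

assert np.allclose(O_std, O_tr)
```

Both formulations are algebraically identical; the payoff on hardware comes from how WGMMA tiles map the M-dimension, which the transposed layout lets the long KV context occupy.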