🤖 AI Summary
To address the low inference efficiency of Mamba-like state space models (SSMs) on FPGAs and their poor compatibility with existing LLM accelerators, this paper proposes a quantization and architecture co-optimization methodology. It introduces rotation-assisted quantization and power-of-two SSM quantization, which together reduce most of the computation to efficient 4-bit arithmetic, and designs an FPGA-specific architecture that partially unrolls the Mamba computation, incorporating computation reordering, fine-grained tiling, and operator fusion. Post-training quantized deployments are implemented on the Xilinx Versal VCK190 and Alveo U280 platforms. On VCK190, the design achieves 4.65–6.06× higher energy efficiency than the GPU baseline; on U280, it attains 93 tokens/s throughput, 1.43× that of the GPU baseline. This work presents the first systematic, high-efficiency, high-throughput FPGA acceleration of Mamba inference, demonstrating both significant hardware-utilization improvements and practical deployment viability.
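The rotation-assisted quantization mentioned above follows the general idea of multiplying activations by an orthonormal rotation so that outlier channels are spread across all dimensions before low-bit quantization. A minimal sketch of that idea using a Sylvester-construction Hadamard rotation; the function name, dimension, and outlier magnitude are illustrative choices, not the paper's implementation:

```python
import numpy as np

def hadamard(n):
    """Orthonormal Hadamard matrix via Sylvester construction (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)  # scale so that H @ H.T == I

rng = np.random.default_rng(0)
x = rng.normal(size=64)
x[3] = 30.0                 # inject one outlier channel (hypothetical)
H = hadamard(64)
x_rot = H @ x               # rotation spreads the outlier's energy across channels

# The dynamic range shrinks, so a 4-bit quantizer loses far less precision.
print(np.max(np.abs(x)), np.max(np.abs(x_rot)))
```

Because the rotation is orthonormal, it can be folded into the adjacent weight matrix (`W @ x == (W @ H.T) @ (H @ x)`), so the model's outputs are preserved while the quantized tensors become easier to represent in low precision.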
📝 Abstract
State space models (SSMs) like Mamba have recently attracted much attention. Compared to Transformer-based large language models (LLMs), Mamba achieves computational complexity that is linear in sequence length and demonstrates superior performance. However, Mamba is hard to accelerate due to its scattered activation outliers and complex computation dependencies, which render existing LLM accelerators inefficient. In this paper, we propose LightMamba, which co-designs the quantization algorithm and the FPGA accelerator architecture for efficient Mamba inference. We first propose an FPGA-friendly post-training quantization algorithm featuring rotation-assisted quantization and power-of-two SSM quantization to reduce the majority of the computation to 4-bit. We further design an FPGA accelerator that partially unrolls the Mamba computation to balance efficiency against hardware cost. Through computation reordering as well as fine-grained tiling and fusion, the accelerator's hardware utilization and memory efficiency are drastically improved. We implement LightMamba on the Xilinx Versal VCK190 FPGA and achieve 4.65x to 6.06x higher energy efficiency than the GPU baseline. When evaluated on the Alveo U280 FPGA, LightMamba reaches 93 tokens/s, 1.43x that of the GPU baseline.
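Power-of-two SSM quantization, as named in the abstract, restricts quantized values to signed powers of two, so that multiplications in the SSM recurrence can be realized as bit shifts on the FPGA. A minimal sketch under that assumption; the exponent range and zero handling here are illustrative, not the paper's exact scheme:

```python
import numpy as np

def po2_quantize(x, exp_min=-8, exp_max=0):
    """Round each value to the nearest signed power of two.

    Hypothetical sketch: the exponent range [exp_min, exp_max] is an
    illustrative choice. Returns the quantized values and the integer
    exponents that a hardware shifter would consume.
    """
    sign = np.sign(x)
    mag = np.abs(x)
    # Avoid log2(0): zeros are passed through as zero below.
    exp = np.where(mag > 0, np.round(np.log2(np.where(mag > 0, mag, 1.0))), exp_min)
    exp = np.clip(exp, exp_min, exp_max).astype(int)
    q = sign * np.exp2(exp)
    return np.where(mag > 0, q, 0.0), exp

x = np.array([0.3, -0.07, 0.5, 0.0])
q, exp = po2_quantize(x)
# 0.3 -> 2**-2 = 0.25, -0.07 -> -2**-4 = -0.0625, 0.5 -> 2**-1, 0 -> 0
```

Multiplying by such a value needs only the stored exponent: `y * q` becomes a right shift of `y` by `-exp` positions (plus a sign flip), which costs far less FPGA logic than a multiplier.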