LightMamba: Efficient Mamba Acceleration on FPGA with Quantization and Hardware Co-design

📅 2025-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low inference efficiency of Mamba-like state space models (SSMs) on FPGAs and poor compatibility with existing LLM accelerators, this paper proposes a quantization–architecture co-optimization methodology. It introduces rotation-assisted quantization and power-of-two SSM quantization—enabling efficient 4-bit arithmetic—and designs an FPGA-specific architecture for partially unrolled Mamba computation, incorporating computation reordering, fine-grained tiling, and operator fusion. Post-training quantized deployments are implemented on Xilinx VCK190 and U280 platforms. On VCK190, the design achieves 4.65–6.06× higher energy efficiency than GPU baselines; on U280, it attains 93 tokens/s throughput—1.43× that of GPU baselines. This work presents the first systematic, high-efficiency, high-throughput FPGA acceleration of Mamba inference, demonstrating both significant hardware utilization improvements and practical deployment viability.

📝 Abstract
State space models (SSMs) like Mamba have recently attracted much attention. Compared to Transformer-based large language models (LLMs), Mamba achieves linear computational complexity in sequence length and demonstrates superior performance. However, Mamba is hard to accelerate due to its scattered activation outliers and complex computation dependencies, which render existing LLM accelerators inefficient. In this paper, we propose LightMamba, which co-designs the quantization algorithm and the FPGA accelerator architecture for efficient Mamba inference. We first propose an FPGA-friendly post-training quantization algorithm that features rotation-assisted quantization and power-of-two SSM quantization to reduce the majority of the computation to 4-bit. We further design an FPGA accelerator that partially unrolls the Mamba computation to balance efficiency and hardware cost. Through computation reordering as well as fine-grained tiling and fusion, the hardware utilization and memory efficiency of the accelerator are drastically improved. We implement LightMamba on the Xilinx Versal VCK190 FPGA and achieve 4.65x to 6.06x higher energy efficiency than the GPU baseline. When evaluated on the Alveo U280 FPGA, LightMamba reaches 93 tokens/s, 1.43x that of the GPU baseline.
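The appeal of the power-of-two SSM quantization mentioned in the abstract is that when the quantization scale is an exact power of two, (de)scaling on the FPGA reduces to a bit shift instead of a multiplier. The sketch below is a generic illustration of power-of-two-scale symmetric 4-bit quantization, not the paper's exact scheme; all function names are hypothetical.

```python
import numpy as np

def po2_quantize(x, bits=4):
    """Symmetric quantization with a scale snapped to a power of two.

    Illustrative only: the real-valued per-tensor scale is rounded to
    the nearest power of two so that hardware rescaling is a shift.
    """
    qmax = 2 ** (bits - 1) - 1                 # 7 for signed 4-bit
    scale = np.max(np.abs(x)) / qmax           # real-valued scale
    po2_scale = 2.0 ** np.round(np.log2(scale))  # snap to 2**k
    q = np.clip(np.round(x / po2_scale), -qmax - 1, qmax).astype(np.int8)
    return q, po2_scale

def dequantize(q, po2_scale):
    # On an FPGA this multiply is a shift by log2(po2_scale) bits.
    return q.astype(np.float32) * po2_scale

x = np.array([0.9, -0.3, 0.05, -1.1], dtype=np.float32)
q, s = po2_quantize(x)
x_hat = dequantize(q, s)
```

Snapping the scale trades a small amount of extra rounding error (the scale can be off by up to a factor of sqrt(2)) for a multiplier-free datapath, which is the kind of hardware-algorithm trade-off the paper's co-design targets.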
Problem

Research questions and friction points this paper is trying to address.

Accelerate Mamba on FPGA
Quantize Mamba for efficiency
Optimize hardware for Mamba inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

FPGA-friendly post-training quantization
Rotation-assisted quantization technique
Computation reordering and fine-grained tiling
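Rotation-assisted quantization, listed above, is commonly implemented by multiplying activations with an orthogonal matrix (e.g. a normalized Hadamard matrix) so that outlier energy is spread across channels before low-bit quantization; because the rotation is orthogonal, it can be folded into adjacent weights without changing the layer's output. The following is a minimal sketch of why the rotation helps, using an illustrative 4-bit quantizer rather than the paper's exact algorithm:

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix, normalized to be orthonormal.

    n must be a power of two.
    """
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def quant_error(x, bits=4):
    """Mean absolute error of symmetric round-to-nearest quantization."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return float(np.abs(x - q * scale).mean())

rng = np.random.default_rng(0)
x = rng.normal(size=64)
x[3] = 20.0                       # one outlier inflates the scale for all dims
H = hadamard(64)

err_plain = quant_error(x)        # outlier forces a coarse scale
err_rotated = quant_error(H @ x)  # outlier energy spread across all 64 dims
# H is orthonormal, so comparing errors in the rotated domain is meaningful.
```

A single large outlier forces a coarse quantization step for every channel; after the Hadamard rotation its magnitude is diluted by a factor of sqrt(n), so the rotated vector quantizes with markedly lower error.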
🔎 Similar Papers
2024-03-04 · 2024 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops) · Citations: 5
Authors

Renjie Wei
Institute for Artificial Intelligence & School of Integrated Circuits, Peking University, Beijing, China
Songqiang Xu
School of Software and Microelectronics, Peking University, Beijing, China
Linfeng Zhong
School of Electronic and Computer Engineering, Peking University, Shenzhen, China
Zebin Yang
Peking University, Efficient AI
Qingyu Guo
School of Integrated Circuits, Peking University, Beijing, China
Yuan Wang
Beijing Advanced Innovation Center for Integrated Circuits, Beijing, China
Runsheng Wang
Beijing Advanced Innovation Center for Integrated Circuits & Institute of Electronic Design Automation, Peking University, Wuxi, China
Meng Li
Institute for Artificial Intelligence & School of Integrated Circuits & Beijing Advanced Innovation Center for Integrated Circuits, Peking University, Beijing, China