🤖 AI Summary
To address the low single-batch inference efficiency of large language models (LLMs) on FPGAs and the poor hardware utilization of arithmetic-intensive operations, this work proposes a memory-centric, lookup-table (LUT)-based inference paradigm: computationally intensive operations are replaced by vector-quantized lookups in on-chip memory, integrated with joint activation-weight quantization and a spatio-temporal hybrid architecture—significantly reducing memory bandwidth and cache pressure. The authors present the first fully on-chip inference implementation of a >1B-parameter model (Qwen3-1.7B) on an AMD V80 FPGA. Experiments show 1.66× lower latency than an AMD MI210 and 1.72× higher energy efficiency than an NVIDIA A100; scaling to 32B models retains a 2.16× energy-efficiency advantage. Key innovations include an FPGA-aware, low-latency LUT mechanism and a bandwidth-aware parallel centroid-search design.
📝 Abstract
The rapid progress of large language models (LLMs) has advanced numerous applications, yet efficient single-batch inference remains vital for on-device intelligence. While FPGAs offer fine-grained data control and high energy efficiency, recent GPU optimizations have narrowed their advantage, especially under arithmetic-based computation. To overcome this, we leverage FPGAs' abundant on-chip memory to shift LLM inference from arithmetic- to memory-based computation through table lookups. We present LUT-LLM, the first FPGA accelerator enabling 1B+ LLM inference via vector-quantized memory operations. Our analysis identifies activation-weight co-quantization as the most effective scheme, supported by (1) bandwidth-aware parallel centroid search, (2) efficient 2D table lookups, and (3) a spatial-temporal hybrid design minimizing data caching. Implemented on an AMD V80 FPGA for a customized Qwen 3 1.7B model, LUT-LLM achieves 1.66x lower latency than AMD MI210 and 1.72x higher energy efficiency than NVIDIA A100, scaling to 32B models with 2.16x efficiency gain over A100.
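To make the core idea concrete, here is a minimal NumPy sketch of how a dense matrix-vector product can be replaced by vector-quantized table lookups. All sizes, the random (non-k-means) codebook, and variable names are illustrative assumptions; the paper's actual FPGA datapath, co-quantization scheme, and table layout differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 8-dim sub-vectors, 16 centroids.
d, n, g, k = 64, 32, 8, 16        # in-dim, out-dim, sub-vector length, codebook size
m = d // g                        # number of sub-vector groups

# Offline: vector-quantize the weight matrix W (n x d).
W = rng.standard_normal((n, d))
codebook = rng.standard_normal((m, k, g))   # one toy codebook per group
codes = np.empty((n, m), dtype=np.int64)
for j in range(m):
    sub = W[:, j*g:(j+1)*g]                 # (n, g) weight sub-vectors
    dists = ((sub[:, None, :] - codebook[j][None, :, :]) ** 2).sum(-1)
    codes[:, j] = dists.argmin(1)           # nearest-centroid search

# Online: the GEMV y = W @ x becomes table builds + lookups.
x = rng.standard_normal(d)
table = np.einsum('mkg,mg->mk', codebook, x.reshape(m, g))   # (m, k) partial dots
y_lut = table[np.arange(m), codes].sum(1)   # gather + accumulate, no weight multiplies

# The LUT result equals the GEMV with the *quantized* weights.
W_hat = codebook[np.arange(m)[None, :], codes]  # (n, m, g) reconstructed sub-vectors
y_ref = W_hat.reshape(n, d) @ x
assert np.allclose(y_lut, y_ref)
```

The point of the lookup formulation is that the per-output work is `m` table reads and additions instead of `d` multiply-accumulates, which is why abundant on-chip memory (as on the V80) makes this trade attractive for bandwidth-bound single-batch decoding.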