AI Summary
To address the inefficiency of dequantization on CUDA cores, which cannot keep pace with Tensor Core throughput in W4A8-quantized GEMM for large language model inference, this paper proposes LiquidGEMM, a hardware-efficient W4A8 GEMM kernel. The approach introduces three key innovations: (1) LiquidQuant, a quantization scheme enabling overflow-safe, low-overhead dequantization of four weights in just two arithmetic instructions; (2) an implicit fine-grained pipeline that fully overlaps weight loading, dequantization, and matrix multiply-accumulate (MMA) operations, eliminating software synchronization overhead; and (3) co-optimized scheduling of CUDA cores and Tensor Cores via implicit inter-warp-group pipelining. Experiments show that LiquidGEMM achieves up to 2.90x speedup over state-of-the-art W4A8 kernels and up to 4.94x end-to-end inference speedup. Integrated into TensorRT-LLM, it yields 1.12-1.63x performance gains across diverse models.
Abstract
Quantization is a critical technique for accelerating LLM inference by reducing memory footprint and improving computational efficiency. Among various schemes, 4-bit weight and 8-bit activation quantization (W4A8) offers a strong balance between accuracy and performance. However, existing W4A8 GEMM kernels fall short in practice due to inefficient dequantization on CUDA Cores, which cannot keep pace with the high throughput of Tensor Cores. In this paper, we present LiquidGEMM, a hardware-efficient W4A8 GEMM kernel for efficient LLM serving. LiquidGEMM introduces two key techniques: LiquidQuant, a hardware-efficient quantization method that enables fast, overflow-safe dequantization using just two arithmetic instructions per four elements; and an implicit fine-grained pipeline that fully overlaps weight loading, dequantization, and MMA across warp groups without software synchronization or redundant memory traffic. Experimental results show that LiquidGEMM achieves up to 2.90x speedup over state-of-the-art W4A8 kernels and up to 4.94x end-to-end system-level speedup. Compared to various quantized GEMM kernels in NVIDIA TensorRT-LLM, LiquidGEMM delivers 1.12-1.63x kernel-level performance gains and up to 1.63x system-level speedup.
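The abstract does not spell out LiquidQuant's exact instruction sequence, so as a point of reference, here is a plain scalar sketch of what W4 dequantization computes, assuming a common unsigned 4-bit encoding with zero-point 8 (the encoding and the function name `dequant4` are illustrative assumptions, not the paper's definition):

```cpp
#include <array>
#include <cstdint>

// Illustrative sketch: recover four signed int8 weights from four 4-bit
// nibbles packed into a uint16_t, assuming each nibble n encodes the
// weight n - 8 (zero-point 8). Naively, each element costs a shift,
// a mask, and a subtract.
std::array<int8_t, 4> dequant4(uint16_t packed) {
    std::array<int8_t, 4> out;
    for (int i = 0; i < 4; ++i) {
        uint8_t nibble = (packed >> (4 * i)) & 0xF;        // extract one 4-bit weight
        out[i] = static_cast<int8_t>(nibble) - 8;          // recenter around zero
    }
    return out;
}
```

The point of LiquidQuant is that, on the GPU, the per-element shifts, masks, and subtractions above collapse into just two arithmetic instructions covering all four elements at once, without risking overflow, so dequantization on CUDA Cores no longer stalls the Tensor Core MMA pipeline.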