Titanus: Enabling KV Cache Pruning and Quantization On-the-Fly for LLM Acceleration

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the explosive growth of key-value (KV) caches in large language model (LLM) inference—which causes prohibitive memory storage and access overhead as context length increases—this paper proposes a software-hardware co-designed real-time compression framework. The method introduces: (1) cascade pruning-quantization (CPQ), a novel joint optimization of sparsity and quantization accuracy; (2) a hierarchical quantization scheme that explicitly models inter-channel dependencies to improve reconstruction fidelity; and (3) a two-stage design-space exploration with a customized parallel dataflow that significantly reduces first-token latency. Hardware support includes sparse KV transmission and sub-8-bit arithmetic. Experiments show that, compared to an NVIDIA A100 GPU, the approach achieves 159.9× higher energy efficiency and 49.6× higher throughput; versus FlightLLM, it delivers 34.8× and 29.2× improvements, respectively—all without accuracy degradation.

📝 Abstract
Large language models (LLMs) have achieved great success in various domains. Existing systems cache the Key and Value tensors within the attention block to avoid redundant computation. However, the size of the key-value cache (KV cache) is unpredictable and can be tens of times larger than the weights in long-context scenarios. In this work, we propose Titanus, a software-hardware co-design that compresses the KV cache efficiently on-the-fly. We first propose the cascade pruning-quantization (CPQ) method to reduce KV cache movement. A hierarchical quantization extension strategy is introduced to tackle the non-independent per-channel quantization issue. To further reduce KV cache movement, we transfer only the non-zero KV cache entries between the accelerator and off-chip memory. Moreover, we customize a two-stage design space exploration framework for the CPQ method, and design a novel pipeline and parallelism dataflow to reduce first-token generation time. Experiments show that Titanus achieves 159.9× energy efficiency and 49.6× throughput over an NVIDIA A100 GPU, and 34.8× energy efficiency and 29.2× throughput over FlightLLM. The code for Titanus is available at https://github.com/peilin-chen/Titanus-for-LLM-acceleration.
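The core idea of cascading pruning before quantization can be illustrated with a minimal sketch. The snippet below is not the paper's actual CPQ implementation (which co-designs the scheme with hardware and a hierarchical quantization extension); it is a simplified NumPy illustration of the two stages: magnitude-prune the KV cache to a target sparsity, then symmetrically quantize the surviving entries per channel to sub-8-bit integers. All function names and parameters here are hypothetical.

```python
import numpy as np

def cascade_prune_quantize(kv, prune_ratio=0.3, bits=4):
    """Sketch of cascade pruning-quantization (simplified, not the paper's CPQ).

    kv: float32 array of shape (tokens, channels).
    Returns quantized values, per-channel scales, and the sparsity mask.
    """
    # Stage 1 (pruning): zero out the smallest-magnitude entries globally.
    flat = np.abs(kv).flatten()
    k = int(len(flat) * prune_ratio)
    threshold = np.partition(flat, k)[k] if k > 0 else 0.0
    mask = np.abs(kv) >= threshold
    pruned = kv * mask

    # Stage 2 (quantization): symmetric per-channel scaling to `bits` bits.
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit
    scale = np.abs(pruned).max(axis=0, keepdims=True) / qmax
    scale[scale == 0] = 1.0                         # guard all-zero channels
    q = np.clip(np.round(pruned / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale, mask

def dequantize(q, scale, mask):
    # Reconstruct an approximation of the pruned KV cache.
    return q.astype(np.float32) * scale * mask

rng = np.random.default_rng(0)
kv = rng.normal(size=(128, 64)).astype(np.float32)  # (tokens, head_dim)
q, scale, mask = cascade_prune_quantize(kv, prune_ratio=0.3, bits=4)
recon = dequantize(q, scale, mask)
```

Because the pruned entries are exact zeros, only the non-zero values and their positions need to move between the accelerator and off-chip memory, which is the motivation for the sparse KV transmission described in the abstract.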
Problem

Research questions and friction points this paper is trying to address.

Reduce KV cache size for LLM acceleration
Compress KV cache dynamically with pruning-quantization
Optimize energy efficiency and throughput in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cascade pruning-quantization method for KV cache
Hierarchical quantization extension strategy
Two-stage design space exploration framework