PAM: Processing Across Memory Hierarchy for Efficient KV-centric LLM Serving System

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance bottleneck in large language model (LLM) serving caused by the high bandwidth and substantial memory capacity demands of key-value (KV) caching. To overcome this challenge, the authors propose PAM, a processing-in-memory (PIM)-based heterogeneous architecture tailored for KV operations. PAM integrates context-locality-aware KV cache distribution, a fine-grained cross-device parallel attention algorithm (PAMattention), and an online dynamic scheduling mechanism to co-optimize computation and storage across the memory hierarchy. This system is the first to jointly tackle both bandwidth and capacity constraints of KV caches, significantly enhancing LLM inference throughput and scalability while enabling low-latency, energy-efficient large-scale deployment.
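The summary above describes PAMattention only at a high level, and the paper's actual algorithm is not reproduced here. As a rough illustration of what fine-grained parallel attention across devices generally requires, the sketch below shows the standard numerically stable way to compute attention over per-device KV shards and merge the partial results (a log-sum-exp rescaling, as used in split-KV decoding schemes); all function names and data shapes are assumptions for illustration, not details from the paper.

```python
import math

def attend_partial(q, keys, vals):
    """Attention over one KV shard (e.g. the tokens resident on one PIM device).

    Returns (max_logit, sum_of_exps, unnormalized_weighted_sum) so that
    shards can be merged later without losing numerical stability.
    """
    logits = [sum(qi * ki for qi, ki in zip(q, k)) for k in keys]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    out = [sum(e * v[d] for e, v in zip(exps, vals)) for d in range(len(vals[0]))]
    return m, s, out

def merge_partials(parts):
    """Merge per-device partial attention results into the exact global result.

    Each shard's statistics are rescaled by exp(m_shard - m_global), so the
    merged output equals attention computed over the full, unsharded KV cache.
    """
    m_global = max(m for m, _, _ in parts)
    s_global = 0.0
    out = [0.0] * len(parts[0][2])
    for m, s, o in parts:
        scale = math.exp(m - m_global)
        s_global += s * scale
        for d in range(len(out)):
            out[d] += o[d] * scale
    return [x / s_global for x in out]
```

Because the merge is exact, each device can process only its local KV tokens at full local bandwidth, and only the small per-shard statistics cross device boundaries.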

📝 Abstract
The widespread adoption of Large Language Models (LLMs) has exponentially increased the demand for efficient serving systems. With growing requests and context lengths, key-value (KV)-related operations, including attention computation and KV cache storage, have emerged as critical bottlenecks: they require massive memory bandwidth and capacity. Unfortunately, existing LLM serving systems, optimized for compute-bound workloads, fail to handle these memory-intensive operations effectively. Even with Processing-In-Memory (PIM) technology, current single-level memory designs cannot simultaneously satisfy the bandwidth and capacity requirements. To address these challenges, we propose Processing Across Memory (PAM), a KV-centric LLM serving system that coordinates heterogeneous PIM-enabled memory devices within a hierarchical architecture. PAM introduces a novel computing paradigm to balance high memory bandwidth with scalable capacity. First, PAM exploits the inherent context locality in KV access patterns to intelligently distribute KV tokens across the memory hierarchy. Second, to further exploit context locality, it introduces the PAMattention algorithm, enabling fine-grained parallel attention computation across heterogeneous PIM devices. Finally, PAM incorporates an intra-device KV mapping scheme, an inter-device KV migration interface, and an online inter-device KV scheduling algorithm to dynamically balance computational workloads. By addressing both bandwidth and capacity demands simultaneously, PAM significantly enhances the efficiency and scalability of LLM serving systems, paving the way for cost-effective, high-performance solutions in the era of large-scale AI.
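The abstract's first idea, distributing KV tokens across the hierarchy by context locality, can be sketched in a few lines. This is a hypothetical illustration, not the paper's policy: the tier names (`"hbm_pim"`, `"dimm_pim"`) and the recency heuristic (recent tokens are touched at every decoding step, so they belong in the bandwidth-optimized tier) are assumptions made for the example.

```python
def place_kv_tokens(num_tokens, hbm_capacity):
    """Assign each KV token index to a memory tier.

    Hypothetical placement policy: the most recent tokens (highest
    access frequency during decoding) go to a bandwidth-optimized
    tier ("hbm_pim"); older tokens spill to a capacity-optimized
    tier ("dimm_pim").
    """
    placement = {}
    for t in range(num_tokens):
        # Recent tokens are read every decoding step -> hot.
        if t >= num_tokens - hbm_capacity:
            placement[t] = "hbm_pim"
        else:
            placement[t] = "dimm_pim"
    return placement
```

A real system would also migrate tokens between tiers as the context grows, which is what the abstract's inter-device KV migration interface appears to handle.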
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
KV cache
memory bottleneck
memory bandwidth
memory capacity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Processing-In-Memory (PIM)
KV cache
memory hierarchy
attention computation
LLM serving
Lian Liu
Institute of Computing Technology, CAS, University of Chinese Academy of Sciences, Beijing, China
Shixin Zhao
Institute of Computing Technology, CAS, University of Chinese Academy of Sciences, Beijing, China
Yutian Zhou
Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Beijing, China
Yintao He
Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Beijing, China
Mengdi Wang
Institute of Computing Technology, Chinese Academy of Sciences
accelerator architecture design, multi-core system
Yinhe Han
State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Ying Wang
Institute of Computing Technology, Chinese Academy of Sciences
Reliable Computer Architecture, VLSI design, Machine learning, Memory system