🤖 AI Summary
This work addresses the performance bottleneck in large language model (LLM) serving caused by the high memory-bandwidth and capacity demands of key-value (KV) caching. To overcome this challenge, the authors propose PAM, a processing-in-memory (PIM)-based heterogeneous architecture tailored for KV operations. PAM integrates context-locality-aware KV cache distribution, a fine-grained cross-device parallel attention algorithm (PAMattention), and an online dynamic scheduling mechanism to co-optimize computation and storage across the memory hierarchy. By jointly tackling both the bandwidth and capacity constraints of KV caches, the system significantly enhances LLM inference throughput and scalability while enabling low-latency, energy-efficient large-scale deployment.
📝 Abstract
The widespread adoption of Large Language Models (LLMs) has dramatically increased the demand for efficient serving systems. With growing request volumes and context lengths, key-value (KV)-related operations, including attention computation and KV cache storage, have emerged as critical bottlenecks, demanding massive memory bandwidth and capacity. Unfortunately, existing LLM serving systems, optimized for compute-bound workloads, fail to handle these memory-intensive operations effectively. Even with Processing-In-Memory (PIM) technology, current single-level memory designs cannot simultaneously satisfy the bandwidth and capacity requirements. To address these challenges, we propose Processing Across Memory (PAM), a KV-centric LLM serving system that coordinates heterogeneous PIM-enabled memory devices within a hierarchical architecture. PAM introduces a novel computing paradigm that balances high memory bandwidth with scalable capacity. First, PAM exploits the inherent context locality in KV access patterns to intelligently distribute KV tokens across the memory hierarchy. Second, to further exploit context locality, it introduces the PAMattention algorithm, enabling fine-grained parallel attention computation across heterogeneous PIM devices. Finally, PAM incorporates an intra-device KV mapping scheme, an inter-device KV migration interface, and an online inter-device KV scheduling algorithm to dynamically balance computational workloads. By addressing both bandwidth and capacity demands simultaneously, PAM significantly enhances the efficiency and scalability of LLM serving systems, paving the way for cost-effective, high-performance solutions in the era of large-scale AI.
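The abstract does not spell out how PAMattention merges attention results computed on different devices. As a hedged illustration only, the sketch below shows the standard way partitioned attention can be made exact: each device computes attention over its own KV shard plus a log-sum-exp statistic, and the partial outputs are combined with log-sum-exp weights (the online-softmax trick used by partitioned-decoding kernels). The shard split by "recent" vs. "older" tokens is a hypothetical stand-in for PAM's context-locality-aware distribution; function names and shapes are ours, not the paper's.

```python
import numpy as np

def partial_attention(q, k, v):
    """Attention for one device's KV shard.
    Returns (out, lse): the shard-local output and log-sum-exp of its scores."""
    scores = k @ q / np.sqrt(q.shape[0])   # (n_shard,) scaled dot-product scores
    m = scores.max()
    w = np.exp(scores - m)                  # numerically stabilized weights
    denom = w.sum()
    out = (w @ v) / denom                   # shard-local softmax-weighted values
    lse = m + np.log(denom)                 # log of the shard's softmax mass
    return out, lse

def merge_partials(partials):
    """Exactly combine per-device partial outputs via their log-sum-exps."""
    outs, lses = zip(*partials)
    lses = np.array(lses)
    m = lses.max()
    weights = np.exp(lses - m)
    weights /= weights.sum()                # each shard's share of total softmax mass
    return sum(w * o for w, o in zip(weights, outs))

# Toy example: one query attending to a KV cache split across two devices.
rng = np.random.default_rng(0)
d = 8
q = rng.standard_normal(d)
k = rng.standard_normal((16, d))
v = rng.standard_normal((16, d))

# Hypothetical locality-based split: recent tokens on a high-bandwidth device,
# older tokens on a high-capacity device (an assumption, not PAM's actual policy).
merged = merge_partials([
    partial_attention(q, k[:8], v[:8]),    # "older" shard
    partial_attention(q, k[8:], v[8:]),    # "recent" shard
])

# Reference: monolithic attention over the full, unpartitioned cache.
scores = k @ q / np.sqrt(d)
ref = np.exp(scores - scores.max())
ref = (ref / ref.sum()) @ v
assert np.allclose(merged, ref)            # the split computation is exact
```

Because the merge is exact rather than approximate, a scheduler is free to place KV shards wherever bandwidth and capacity allow without changing the attention result, which is the property a hierarchical design like PAM relies on.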