Scalable Processing-Near-Memory for 1M-Token LLM Inference: CXL-Enabled KV-Cache Management Beyond GPU Limits

📅 2025-10-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the GPU memory bottleneck and the high data-migration overhead of KV-cache management in million-token-context LLM inference, this paper proposes a CXL-based near-memory computing architecture. It offloads token-page selection to an accelerator co-located with CXL memory, eliminating frequent transfers of KV data back to the GPU. A hybrid parallelization strategy and a steady-token selection mechanism further improve compute efficiency and scalability. The architecture supports paged KV-cache management and three execution modes: GPU-only, PNM-only (processing-near-memory), and collaborative GPU-PNM execution. Evaluated on a 405B-parameter model with a 1M-token context, it achieves up to 21.9× higher throughput, up to 60× lower energy per token, and up to 7.3× better total cost efficiency, significantly advancing energy efficiency and scalability for ultra-long-context LLM inference.
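The core idea can be illustrated with a minimal sketch (not the paper's code; page size, scoring rule, and all names are assumptions): the KV cache is split into fixed-size token pages, and a page-scoring step stands in for the near-memory accelerator, so only the top-scoring pages would ever cross the CXL link back to the GPU.

```python
# Hypothetical sketch of paged KV-cache with near-memory page selection.
# All names, the page size, and the max-dot-product scoring rule are
# illustrative assumptions, not the paper's actual design.
import numpy as np

PAGE_SIZE = 4  # tokens per KV page (illustrative)

def build_pages(keys):
    """Split a (T, d) key matrix into fixed-size token pages."""
    return [keys[i:i + PAGE_SIZE] for i in range(0, len(keys), PAGE_SIZE)]

def select_pages_near_memory(query, pages, top_k):
    """Stand-in for the PNM accelerator: score each page by its best
    query-key dot product and return the indices of the top_k pages,
    so the full KV cache never has to move to the GPU."""
    scores = [float(np.max(page @ query)) for page in pages]
    return sorted(np.argsort(scores)[-top_k:].tolist())

rng = np.random.default_rng(0)
keys = rng.standard_normal((16, 8))   # 16 cached tokens, head dim 8
query = rng.standard_normal(8)        # current decode-step query

pages = build_pages(keys)
chosen = select_pages_near_memory(query, pages, top_k=2)
print(chosen)  # indices of the 2 highest-scoring pages
```

In the paper's setting this selection runs inside the CXL memory device, which is what eliminates the recall traffic and frees GPU memory for larger batches.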

📝 Abstract
The expansion of context windows in large language models (LLMs) to multi-million tokens introduces severe memory and compute bottlenecks, particularly in managing the growing Key-Value (KV) cache. While Compute Express Link (CXL) enables non-eviction frameworks that offload the full KV-cache to scalable external memory, these frameworks still suffer from costly data transfers when recalling non-resident KV tokens to limited GPU memory as context lengths increase. This work proposes scalable Processing-Near-Memory (PNM) for 1M-Token LLM Inference, a CXL-enabled KV-cache management system that coordinates memory and computation beyond GPU limits. Our design offloads token page selection to a PNM accelerator within CXL memory, eliminating costly recalls and enabling larger GPU batch sizes. We further introduce a hybrid parallelization strategy and a steady-token selection mechanism to enhance compute efficiency and scalability. Implemented atop a state-of-the-art CXL-PNM system, our solution delivers consistent performance gains for LLMs with up to 405B parameters and 1M-token contexts. Our PNM-only offloading scheme (PNM-KV) and GPU-PNM hybrid with steady-token execution (PnG-KV) achieve up to 21.9x throughput improvement, up to 60x lower energy per token, and up to 7.3x better total cost efficiency than the baseline, demonstrating that CXL-enabled multi-PNM architectures can serve as a scalable backbone for future long-context LLM inference.
Problem

Research questions and friction points this paper is trying to address.

Managing KV-cache bottlenecks in large-context LLM inference
Reducing costly data transfers between GPU and external memory
Enabling scalable processing for million-token contexts beyond GPU limits
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses CXL-enabled PNM for KV-cache management
Offloads token selection to PNM accelerator
Implements hybrid parallelization with steady-token mechanism
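The three execution modes described above can be sketched as a simple dispatcher. This is an assumption-laden illustration: the thresholds, function names, and the rule for when steady tokens trigger hybrid execution are invented here, not taken from the paper.

```python
# Illustrative dispatcher for the paper's three execution modes
# (GPU-only, PNM-only, GPU-PNM collaborative). Thresholds and the
# steady-token rule are assumptions for illustration only.
def choose_mode(context_len, gpu_kv_budget, steady_tokens=0):
    """Pick where attention over the KV cache should run."""
    if context_len <= gpu_kv_budget:
        return "gpu-only"   # entire KV cache fits in GPU memory
    if steady_tokens == 0:
        return "pnm-only"   # all token-page selection stays near memory
    return "gpu-pnm"        # steady tokens on GPU, remainder on PNM

print(choose_mode(8_000, 32_000))            # -> gpu-only
print(choose_mode(1_000_000, 32_000))        # -> pnm-only
print(choose_mode(1_000_000, 32_000, 4096))  # -> gpu-pnm
```

The hybrid path reflects the paper's PnG-KV scheme, where a steady subset of tokens is processed on the GPU while the PNM accelerator handles selection over the rest of the 1M-token cache.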
👥 Authors
Dowon Kim, Ulsan National Institute of Science and Technology (UNIST)
MinJae Lee, Hanyang University
Janghyeon Kim, Hanyang University
HyuckSung Kwon, Hanyang University
Hyeonggyu Jeong, Hanyang University
Sang-Soo Park, Samsung Electronics
Minyong Yoon, Samsung Electronics
Si-Dong Roh, Samsung Electronics
Yongsuk Kwon, Samsung Electronics
Jinin So, Samsung Electronics
Jungwook Choi, Hanyang University