Where Matters More Than What: Decoding-aligned KV Cache Compression via Position-aware Pseudo Queries

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing KV cache compression methods rely on attention patterns from the prefill phase, which often fail to predict which tokens will be critical during generation, leading to performance degradation. This work proposes DapQ, a lightweight framework that employs position-aware pseudo queries to simulate decoding-phase query behavior, thereby constructing an importance-evaluation window aligned with the generation process for precise token pruning. The study finds that positional information matters more than semantic content when crafting decoding-aligned pseudo queries. Evaluated across diverse large language models and benchmarks, DapQ significantly outperforms existing approaches, achieving near-lossless performance (99.5%) on the NIAH task with only a 3% KV cache budget.

📝 Abstract
The Key-Value (KV) cache is crucial for efficient Large Language Model (LLM) inference, but excessively long contexts drastically increase its memory footprint. Existing KV cache compression methods typically rely on input-side attention patterns within a prompt observation window to estimate token importance during the prefill stage. Because these assessments are not derived from the decoding process, such methods fail to preserve tokens that are critical for future generation. Intuitively, an effective observation window should mirror the decoding-stage queries so that it accurately reflects which tokens the generation process will attend to. However, ground-truth decoding queries are inherently unavailable during inference. When constructing pseudo queries to approximate them, we find that positional information plays a more critical role than semantic content. Motivated by this insight, we propose decoding-aligned KV cache compression via position-aware pseudo queries (DapQ), a novel and lightweight eviction framework that leverages position-aware pseudo queries to simulate the output tokens, thereby establishing an effective observation window for importance assessment. This window aligns closely with the actual generation context and enables precise token eviction. Extensive evaluations across multiple benchmarks and LLMs demonstrate that DapQ achieves superior performance, particularly under strict memory constraints (e.g., nearly lossless performance of 99.5% on NIAH with a 3% KV cache budget).
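To make the mechanism concrete, here is a minimal sketch of decoding-aligned eviction with a position-aware pseudo query. The abstract does not specify DapQ's exact pseudo-query construction, so this sketch assumes a RoPE-style rotation of a content vector to a hypothetical future decoding position; the function names (`position_aware_pseudo_query`, `evict_kv_cache`), the single-head layout, and all shapes are illustrative, not the authors' API.

```python
import numpy as np

def position_aware_pseudo_query(content, pos):
    """Rotate a content vector to decoding position `pos` (RoPE-style).

    Assumption: the pseudo query borrows some content vector (e.g. an
    average of recent prompt queries) and injects positional information
    via a rotary embedding, since the paper reports that position matters
    more than semantics. `content` has shape (head_dim,), head_dim even.
    """
    half = content.shape[-1] // 2
    freqs = 1.0 / (10000.0 ** (np.arange(half) / half))
    ang = pos * freqs
    x1, x2 = content[:half], content[half:]
    return np.concatenate([x1 * np.cos(ang) - x2 * np.sin(ang),
                           x1 * np.sin(ang) + x2 * np.cos(ang)])

def evict_kv_cache(keys, values, pseudo_query, budget):
    """Keep the `budget` cached tokens the pseudo query attends to most.

    keys/values: (seq_len, head_dim). Scores are scaled dot products;
    softmax is omitted because it does not change the top-k ranking.
    Returns the compressed cache plus kept indices in original order.
    """
    scores = keys @ pseudo_query / np.sqrt(keys.shape[-1])
    keep = np.sort(np.argsort(scores)[-budget:])
    return keys[keep], values[keep], keep
```

Usage: after prefill, build one pseudo query for a position just beyond the prompt (simulating the first output token), score the full cache once, and evict everything outside the budget, so the observation window reflects generation-time attention rather than prompt-internal attention.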
Problem

Research questions and friction points this paper is trying to address.

KV cache compression
Large Language Models
decoding alignment
memory efficiency
token importance
Innovation

Methods, ideas, or system contributions that make the work stand out.

KV cache compression
position-aware pseudo queries
decoding-aligned
LLM inference
token eviction