🤖 AI Summary
This work addresses the excessive memory and computational overhead of long-sequence inference with large language models, where the key-value (KV) cache grows linearly with sequence length. Existing cache eviction methods struggle to capture complex dependencies among KV pairs, often causing significant performance degradation. To overcome this, we propose ForesightKV, a framework that formulates cache eviction as a Markov decision process. Our approach introduces the Golden Eviction algorithm, which generates optimal eviction trajectories from future attention scores, and combines supervised learning with a Pairwise Ranking Loss and reinforcement learning via GRPO for joint optimization. Evaluated on the AIME2024 and AIME2025 benchmarks, our method outperforms prior approaches while using only half their cache budget, and it mitigates the sharp increase in language modeling loss caused by low-entropy tokens.
📝 Abstract
Recently, large language models (LLMs) have shown remarkable reasoning abilities by producing long reasoning traces. However, as the sequence length grows, the key-value (KV) cache expands linearly, incurring significant memory and computation costs. Existing KV cache eviction methods mitigate this issue by discarding less important KV pairs, but they often fail to capture complex dependencies among KV pairs, resulting in performance degradation. To better balance efficiency and performance, we introduce ForesightKV, a training-based KV cache eviction framework that learns to predict which KV pairs to evict during long-text generation. We first design the Golden Eviction algorithm, which identifies the optimal KV pairs to evict at each step using future attention scores. These eviction traces and per-step scores are then distilled via supervised training with a Pairwise Ranking Loss. Furthermore, we formulate cache eviction as a Markov Decision Process and apply the GRPO algorithm to mitigate the sharp increase in language modeling loss on low-entropy tokens. Experiments on the AIME2024 and AIME2025 benchmarks across three reasoning models demonstrate that ForesightKV consistently outperforms prior methods under only half the cache budget, while benefiting synergistically from both supervised and reinforcement learning.
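The core idea behind Golden Eviction — ranking cached KV pairs by the attention they will receive from future queries and evicting the least-attended ones — can be sketched as follows. This is a minimal illustration under assumed shapes and names (`future_attention`, `golden_eviction`, a flat per-pair cache), not the paper's actual implementation, which operates inside a transformer's attention layers:

```python
import numpy as np

def golden_eviction(future_attention: np.ndarray, budget: int) -> set:
    """Hypothetical sketch: given the attention scores that each cached KV
    pair receives from future decoding steps (shape [num_future_steps,
    num_kv]), keep the `budget` most-attended pairs and evict the rest.
    Returns the indices of the evicted KV pairs."""
    # Cumulative attention each KV pair receives from future queries.
    importance = future_attention.sum(axis=0)   # shape [num_kv]
    order = np.argsort(importance)              # ascending importance
    num_evict = max(future_attention.shape[1] - budget, 0)
    return set(order[:num_evict].tolist())

# Toy example: 2 future steps, 4 cached KV pairs, budget of 2.
att = np.array([[0.1, 0.4, 0.25, 0.25],
                [0.2, 0.5, 0.10, 0.20]])
evicted = golden_eviction(att, budget=2)  # evicts the 2 least-attended pairs
```

In practice such an oracle is only computable offline (it peeks at future attention), which is why the paper distills its traces into a learned eviction policy.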
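The supervised distillation step relies on a pairwise ranking objective: the learned predictor should order KV pairs the same way the oracle's future-attention scores do. A minimal sketch of such a loss, with hypothetical names (`pred`, `gold`) and a hinge-style formulation assumed for illustration:

```python
import numpy as np

def pairwise_ranking_loss(pred: np.ndarray, gold: np.ndarray,
                          margin: float = 0.0) -> float:
    """Hypothetical sketch of a Pairwise Ranking Loss: for every pair (i, j)
    where the gold (future-attention) score of i exceeds that of j, penalize
    the predictor unless it ranks i above j by at least `margin`."""
    diff_pred = pred[:, None] - pred[None, :]        # s_i - s_j
    mask = (gold[:, None] - gold[None, :]) > 0       # pairs where i outranks j
    loss = np.maximum(margin - diff_pred, 0.0) * mask
    return float(loss.sum() / max(mask.sum(), 1))    # mean over ranked pairs

gold = np.array([3.0, 2.0, 1.0])       # oracle importance scores
loss_good = pairwise_ranking_loss(np.array([2.0, 1.0, 0.5]), gold)  # same order
loss_bad = pairwise_ranking_loss(np.array([0.5, 1.0, 2.0]), gold)   # reversed
```

A ranking loss is a natural fit here because eviction only needs the relative order of KV pairs, not their exact attention values.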