Revisiting Multimodal KV Cache Compression: A Frequency-Domain-Guided Outlier-KV-Aware Approach

📅 2025-11-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Multimodal large language models (MLLMs) suffer from linear KV cache expansion due to high-resolution visual inputs, incurring substantial inference overhead. Existing KV compression methods—relying on attention scores—are incompatible with efficient attention kernels (e.g., FlashAttention) and neglect the actual contribution of value vectors to output generation. This paper proposes the first frequency-domain KV compression framework tailored for multimodal settings. Grounded in spectral energy distribution, it formally defines and identifies “outlier KV” tokens, employs low-pass filtering to retain dominant energy components, and integrates an outlier-aware, layer-adaptive dynamic budget allocation mechanism. Crucially, the method is fully compatible with mainstream efficient attention kernels. Experiments across multiple MLLMs demonstrate up to 1.69× decoding speedup and 80% KV memory reduction, with zero degradation in task performance.

Technology Category

Application Category

📝 Abstract
Multimodal large language models suffer from substantial inference overhead since multimodal KV Cache grows proportionally with the visual input length. Existing multimodal KV Cache compression methods mostly rely on attention score to reduce cache size, which makes them are incompatible with established efficient attention kernels (e.g., FlashAttention) and ignores the contribution of value vectors to the attention output. In this work, we revisit multimodal KV Cache compression from the perspective of the KV matrices' distribution. First, we observe that frequency-domain energy of multimodal KV matrices is predominantly concentrated in low-frequency and extract this principal energy via a low-pass filter. Further, we find that removing KV pairs that deviate substantially from this principal energy leads to a pronounced performance drop, which we define as Outlier KVs. Considering Outlier KVs are more likely to encode features critical for inference, we propose FlashCache, a frequency-domain-guided, Outlier-KV-aware KV Cache compression framework. First, we introduce an Outlier KV Recognition Module that models the principal component of multimodal KV matrices in the frequency domain and preferentially retains KV pairs that significantly deviate from it. Furthermore, Dynamic Budget Allocation Module is designed to adaptively determine the per-layer KV Cache size to retain more Outlier KVs. Experiments on multiple MLLMs and benchmarks demonstrate that FlashCache outperforms state-of-the-art multimoal KV compression methods, achieving up to 1.69 times faster decoding with 80% lower KV memory usage while maintaining task performance.
Problem

Research questions and friction points this paper is trying to address.

Compressing multimodal KV Cache to reduce memory usage and speed up inference
Addressing incompatibility of existing methods with efficient attention kernels like FlashAttention
Preserving critical outlier KV pairs that significantly impact model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses frequency-domain analysis for KV cache compression
Identifies and retains outlier KV pairs for performance
Dynamically allocates KV cache budget per layer
🔎 Similar Papers
No similar papers found.
Yaoxin Yang
Yaoxin Yang
Fudan University
Efficient Deep LearningMLLMModel Compression
P
Peng Ye
The Chinese University of Hong Kong
X
Xudong Tan
College of Future Information Technology, Fudan University
Chongjun Tu
Chongjun Tu
fudan university
neural architecture searchdataset pruningMLLM inference acceleration
M
Maosen Zhao
College of Future Information Technology, Fudan University
J
Jia Hao
College of Future Information Technology, Fudan University
T
Tao Chen
College of Future Information Technology, Fudan University, Shanghai Innovation Institute.