🤖 AI Summary
To address inefficient KV caching and the context loss and hallucination caused by conventional attention-score-based eviction and merging strategies in multimodal large language models (MLLMs), this paper proposes FlowMM. The method introduces two key innovations: (i) a cross-modal information flow modeling framework, the first of its kind, that guides dynamic inter-layer KV merging, and (ii) a sensitivity-adaptive token matching mechanism that jointly evaluates semantic similarity and task criticality to balance modality specificity and contextual integrity. By explicitly mitigating interference from modality distribution skew and attention bias during cache compression, FlowMM achieves substantial efficiency gains. Experiments across mainstream MLLMs demonstrate an 80–95% reduction in KV cache memory footprint, a 1.3–1.8× reduction in decoding latency, and preservation of state-of-the-art task performance.
📝 Abstract
Traditional KV cache eviction strategies, which discard less critical KV pairs based on attention scores, often degrade generation quality, causing context loss or hallucinations. Recent efforts shift toward KV merging, which merges evicted tokens into retained tokens based on similarity. In multimodal scenarios, however, distributional biases across modality tokens and attentional biases in cross-modal interactions limit its effectiveness. This work introduces FlowMM, an adaptive framework for cross-modal information-flow-guided multimodal KV cache merging. FlowMM leverages cross-modal information flow to dynamically apply layer-specific merging strategies, capturing modality-specific patterns while preserving contextual integrity. Furthermore, we introduce a sensitivity-adaptive token matching mechanism that jointly evaluates token similarity and task-critical sensitivity, merging low-risk tokens while safeguarding high-sensitivity ones. Extensive experiments across diverse leading MLLMs show that FlowMM reduces KV cache memory by 80–95% and decoding latency by 1.3–1.8×, while maintaining competitive task performance.
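To make the merging idea concrete, here is a minimal NumPy sketch of sensitivity-adaptive KV merging in the spirit described above. This is an illustrative assumption, not the paper's actual algorithm: the `sensitivity` scores, the keep ratio, and the similarity threshold are all hypothetical stand-ins for whatever FlowMM derives from cross-modal information flow. High-sensitivity tokens are kept verbatim; low-sensitivity tokens are either merged into their most similar retained token (weighted average) or discarded if no retained token is similar enough.

```python
import numpy as np

def merge_kv_cache(keys, values, sensitivity, keep_ratio=0.2, sim_threshold=0.5):
    """Illustrative sensitivity-adaptive KV merging (not the paper's exact method).

    keys, values: (n, d) cached key/value vectors for one layer and head.
    sensitivity:  (n,) task-criticality scores, assumed to be supplied
                  (e.g. derived from cross-modal attention flow).
    """
    n = keys.shape[0]
    n_keep = max(1, int(n * keep_ratio))
    order = np.argsort(-sensitivity)            # most task-critical first
    keep, drop = order[:n_keep], order[n_keep:]

    # Cosine similarity between dropped keys and kept keys.
    kn = keys / (np.linalg.norm(keys, axis=1, keepdims=True) + 1e-8)
    sim = kn[drop] @ kn[keep].T                 # (n_drop, n_keep)

    merged_k = keys[keep].copy()
    merged_v = values[keep].copy()
    weight = np.ones(n_keep)                    # how many tokens each slot absorbed
    for i, d_idx in enumerate(drop):
        j = int(np.argmax(sim[i]))
        if sim[i, j] < sim_threshold:
            continue                            # too dissimilar: evict instead of merge
        # Running weighted average so each kept slot absorbs its matches.
        w = weight[j]
        merged_k[j] = (merged_k[j] * w + keys[d_idx]) / (w + 1)
        merged_v[j] = (merged_v[j] * w + values[d_idx]) / (w + 1)
        weight[j] += 1
    return merged_k, merged_v
```

In a real MLLM, FlowMM additionally varies the strategy per layer based on cross-modal information flow; this sketch fixes a single policy for clarity and compresses the cache to `keep_ratio` of its original size, in line with the 80–95% reductions reported.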