FlowMM: Cross-Modal Information Flow Guided KV Cache Merging for Efficient Multimodal Context Inference

📅 2025-10-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address inefficient KV caching and context loss/hallucination caused by conventional attention-score-based eviction/merging strategies in multimodal large language models (MLLMs), this paper proposes FlowMM. The method introduces two key innovations: (i) the first cross-modal information flow modeling framework to guide dynamic inter-layer KV merging, and (ii) a sensitivity-adaptive token matching mechanism that jointly evaluates semantic similarity and task-criticality to balance modality specificity and contextual integrity. By explicitly mitigating interference from modality distribution skew and attention bias during cache compression, FlowMM achieves substantial efficiency gains. Experiments across mainstream MLLMs demonstrate 80–95% reduction in KV cache memory footprint, 1.3–1.8× lower decoding latency, and preservation of state-of-the-art task performance.

Technology Category

Application Category

📝 Abstract
Traditional KV cache eviction strategies, which discard less critical KV-pairs based on attention scores, often degrade generation quality, causing context loss or hallucinations. Recent efforts shift toward KV merging, merging eviction tokens with retention tokens based on similarity. However, in multimodal scenarios, distributional biases across modality tokens and attentional biases in cross-modal interactions limit its effectiveness. This work introduces FlowMM, an adaptive framework for cross-modal information flow-guided multimodal KV cache merging. FlowMM leverages cross-modal information flow to dynamically apply layer-specific merging strategies, capturing modality-specific patterns while preserving contextual integrity. Furthermore, we introduce a sensitivity-adaptive token matching mechanism that jointly evaluates token similarity and task-critical sensitivity, merging low-risk tokens while safeguarding high-sensitivity ones. Extensive experiments across diverse leading MLLMs show that FlowMM reduces KV cache memory by 80% to 95% and decoding latency by 1.3-1.8x, while maintaining competitive task performance.
Problem

Research questions and friction points this paper is trying to address.

Traditional KV cache eviction degrades multimodal generation quality
Distributional biases limit KV merging effectiveness across modalities
Current methods lack adaptive strategies for cross-modal token interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-modal information flow guides KV cache merging
Layer-specific merging strategies preserve contextual integrity
Sensitivity-adaptive token matching evaluates similarity and importance
🔎 Similar Papers
No similar papers found.
K
Kunxi Li
Zhejiang University
Y
Yufan Xiong
Huazhong Agricultural University
Zhonghua Jiang
Zhonghua Jiang
Zhejiang University
Multimodal LLMEfficient AI3D GenerationFederated Learning
Yiyun Zhou
Yiyun Zhou
Zhejiang University
Data MiningMultimodal LearningLarge Language Model
Zhaode Wang
Zhaode Wang
Alibaba
C
Chengfei Lv
Alibaba
S
Shengyu Zhang
Zhejiang University