🤖 AI Summary
This work addresses the redundancy in long chains of thought (Long CoT) within multimodal large language models, which impairs reasoning efficiency. Existing compression methods often disrupt critical vision–language alignment and lack interpretability. To overcome these limitations, this study formulates CoT compression as a sequential decision-making process and employs reinforcement learning to optimize it, enabling the retention of essential reasoning steps while generating natural-language explanations. The proposed approach significantly shortens reasoning sequences across multiple multimodal reasoning benchmarks without compromising answer accuracy. Moreover, it provides high-quality, interpretable justifications for the compression decisions, thereby achieving efficient and transparent multimodal reasoning.
📄 Abstract
Long chains of thought (Long CoTs) are widely employed in multimodal reasoning models to tackle complex tasks by capturing detailed visual information. However, these trajectories are often excessively long and contain redundant reasoning steps, which hinders inference efficiency. Compressing Long CoTs is a natural solution, yet existing approaches face two major challenges: (1) they may compromise the integrity of visual-textual reasoning by removing essential alignment cues, and (2) the compression process lacks explainability, making it difficult to discern which information is critical. To address these problems, we propose XMCC, an eXplainable Multimodal CoT Compressor that formulates compression as a sequential decision-making process optimized via reinforcement learning. XMCC effectively shortens reasoning trajectories while preserving key reasoning steps and answer correctness, and simultaneously generates natural-language explanations for its compression decisions. Extensive experiments on representative multimodal reasoning benchmarks demonstrate that XMCC not only reduces reasoning length but also produces high-quality explanations for its compression decisions, validating its effectiveness.
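The core framing above, compression as a sequential decision process with a reward balancing brevity against answer-critical content, can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's implementation: the `Step`, `compress`, and `reward` names, the `critical` flag (a stand-in for whether a step is needed for answer correctness), and the reward weights are all illustrative assumptions.

```python
# Hypothetical sketch (not XMCC's actual method): CoT compression framed as a
# sequential keep/drop decision over reasoning steps, scored by a reward that
# trades off brevity against preserving answer-critical steps.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Step:
    text: str
    critical: bool  # illustrative stand-in for "needed to keep the answer correct"


def compress(steps: List[Step], policy: Callable[[Step], bool]) -> List[Step]:
    """Walk the chain of thought step by step; the policy decides keep/drop."""
    return [s for s in steps if policy(s)]


def reward(original: List[Step], kept: List[Step]) -> float:
    """Toy reward: +1 if every critical step survives, minus a length penalty."""
    critical_ok = all(s in kept for s in original if s.critical)
    length_penalty = len(kept) / max(len(original), 1)
    return (1.0 if critical_ok else -1.0) - 0.5 * length_penalty


cot = [
    Step("Read the chart axes", critical=True),
    Step("Restate the question", critical=False),
    Step("Compare the two bars", critical=True),
    Step("Re-check the restatement", critical=False),
]
kept = compress(cot, policy=lambda s: s.critical)  # oracle policy, for illustration
print(len(kept), round(reward(cot, kept), 2))  # → 2 0.75
```

In an actual RL setup the oracle policy would be replaced by a learned one, with the reward signal (answer correctness plus a length penalty) driving which steps the policy learns to keep; the explanation component described in the abstract is not modeled here.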