๐ค AI Summary
This work investigates the robustness of multimodal large language models (MLLMs) in cross-modal causal reasoning when visual details implicitly encode causal cuesโa capability inadequately assessed by existing benchmarks. To address this gap, we introduce MuCR, the first dedicated benchmark for multimodal causal reasoning, constructed from synthetic twin imageโtext pairs and spanning three granularities: image-level matching, phrase-level understanding, and sentence-level explanation. We further propose Visual-enhanced Chain-of-Thought (VcCoT), a prompting method that explicitly guides MLLMs to attend to critical visual causal cues. Experiments reveal that current MLLMs underperform significantly in multimodal causal reasoning compared to text-only settings; accurate visual causal cue identification constitutes the primary bottleneck for cross-modal generalization; and VcCoT boosts average accuracy across mainstream MLLMs by 12.7%. This work establishes a novel benchmark, evaluation paradigm, and reasoning mechanism for multimodal causal inference.
๐ Abstract
Multimodal Large Language Models (MLLMs) have showcased exceptional Chain-of-Thought (CoT) reasoning ability in complex textual inference tasks including causal reasoning. However, will these causalities remain straightforward when crucial hints hide in visual details? If not, what factors might influence cross-modal generalization? Whether we can effectively enhance their capacity for robust causal inference across both text and vision? Motivated by these, we introduce MuCR - a novel Multimodal Causal Reasoning benchmark that leverages synthetic siamese images and text pairs to challenge MLLMs. Additionally, we develop tailored metrics from multiple perspectives, including image-level match, phrase-level understanding, and sentence-level explanation, to comprehensively assess MLLMs' comprehension abilities. Our experiments reveal that current MLLMs fall short in multimodal causal reasoning compared to their performance in purely textual settings. Additionally, we find that identifying visual cues across images is key to effective cross-modal generalization. Finally, we propose a VcCoT strategy that better highlights visual cues, and our results confirm its efficacy in enhancing multimodal causal reasoning. The project is available at: https://github.com/Zhiyuan-Li-John/MuCR