🤖 AI Summary
This work addresses a critical gap in the causal reasoning capabilities of Large Vision-Language Models (LVLMs), specifically causal structure inference, intervention target prediction, and counterfactual prediction. We introduce the first multimodal causal reasoning benchmark tailored to LVLMs and establish a standardized evaluation protocol. Methodologically, we propose a unified three-tier assessment framework covering structural, interventional, and counterfactual reasoning; design an in-context learning protocol grounded in causal representation learning datasets; conduct zero-shot and few-shot cross-task evaluations of leading open-source LVLMs; and integrate causal graph modeling with vision-language alignment analysis. Experimental results reveal severe limitations: current LVLMs achieve under 35% average accuracy on counterfactual reasoning, and for structure identification they rely on superficial statistical correlations rather than mechanistic causal modeling. Our findings provide empirical evidence and concrete directions for developing causally enhanced LVLM architectures.
📝 Abstract
Large language models (LLMs) have shown remarkable ability across a variety of language tasks, especially through their emergent in-context learning capability. By extending LLMs to incorporate visual inputs, large vision-language models (LVLMs) have achieved impressive performance on tasks such as recognition and visual question answering (VQA). Despite increasing interest in applying LLMs to causal reasoning tasks such as causal discovery and counterfactual reasoning, there has been relatively little work showcasing the abilities of LVLMs on visual causal reasoning tasks. We take this opportunity to formally introduce a comprehensive causal reasoning benchmark for multimodal in-context learning with LVLMs. Our CausalVLBench encompasses three representative tasks: causal structure inference, intervention target prediction, and counterfactual prediction. We evaluate state-of-the-art open-source LVLMs on these causal reasoning tasks across three causal representation learning datasets and demonstrate their fundamental strengths and weaknesses. We hope that our benchmark elucidates the drawbacks of existing vision-language models and motivates new directions and paradigms for improving the visual causal reasoning abilities of LVLMs.