CVT-Bench: Counterfactual Viewpoint Transformations Reveal Unstable Spatial Representations in Multimodal LLMs

📅 2026-03-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether spatial representations in multimodal large language models (MLLMs) remain stable under counterfactual viewpoint transformations, a question left open despite the models' strong performance on single-view spatial reasoning. To this end, the authors introduce CVT-Bench, a controlled diagnostic benchmark that evaluates viewpoint consistency, 360° cycle consistency, and relational stability across 100 synthetic scenes and 6,000 relational queries, using hypothetical camera orbit transformations that require no re-rendering. The study reveals that MLLMs commonly violate cycle consistency and exhibit degraded relational stability under viewpoint changes. It further demonstrates that structured input representations, such as scene graphs, substantially improve the robustness of spatial reasoning relative to raw image inputs or textual bounding-box descriptions.

📝 Abstract
Multimodal large language models (MLLMs) achieve strong performance on single-view spatial reasoning tasks, yet it remains unclear whether they maintain stable spatial state representations under counterfactual viewpoint changes. We introduce a controlled diagnostic benchmark that evaluates relational consistency under hypothetical camera orbit transformations without re-rendering images. Across 100 synthetic scenes and 6,000 relational queries, we measure viewpoint consistency, 360° cycle agreement, and relational stability over sequential transformations. Despite high single-view accuracy, state-of-the-art MLLMs exhibit systematic degradation under counterfactual viewpoint changes, with frequent violations of cycle consistency and rapid decay in relational stability. We further evaluate multiple input representations (raw visual input, textual bounding boxes, and structured scene graphs) and show that increasing representational structure improves stability. Our results suggest that single-view spatial accuracy overestimates the robustness of induced spatial representations and that representation structure plays a critical role in counterfactual spatial reasoning.
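The orbit-without-re-rendering idea in the abstract can be made concrete with a small sketch: rotate object positions into a hypothetically orbited camera frame and read off the resulting spatial relations, then check that a full 360° orbit reproduces the original relation (the cycle-consistency criterion). This is a minimal illustration under assumed conventions (2D ground-plane coordinates, camera yaw about the scene's vertical axis), not the benchmark's actual implementation.

```python
import math

def camera_relation(p_a, p_b, azimuth_deg):
    """Relation of object A to object B in the camera frame after orbiting
    the camera by azimuth_deg about the scene's vertical axis.
    Positions are hypothetical (x, y) ground-plane coordinates."""
    th = math.radians(azimuth_deg)

    def to_cam(p):
        # Rotate a world-frame point into the orbited camera's frame.
        x, y = p
        return (x * math.cos(th) + y * math.sin(th),
                -x * math.sin(th) + y * math.cos(th))

    ax, ay = to_cam(p_a)
    bx, by = to_cam(p_b)
    return ("left" if ax < bx else "right",
            "behind" if ay > by else "in front of")

# 360° cycle consistency: a full orbit must recover the original relation.
a, b = (1.0, 2.0), (-0.5, 3.0)
assert camera_relation(a, b, 360) == camera_relation(a, b, 0)
```

A model with a stable spatial representation should satisfy the same invariant: its answer to a relational query after a stated 360° orbit should match its answer for the original viewpoint, which is precisely the cycle agreement the benchmark measures.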
Problem

Research questions and friction points this paper is trying to address.

spatial reasoning
counterfactual viewpoint
multimodal LLMs
relational consistency
spatial representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

counterfactual viewpoint transformation
spatial representation stability
multimodal LLMs
relational consistency
structured scene representation
Shanmukha Vellamcheti
CSSE Department, Auburn University, Auburn AL 36849, USA
Uday Kiran Kothapalli
CSSE Department, Auburn University, Auburn AL 36849, USA
Disharee Bhowmick
CSSE Department, Auburn University, Auburn AL 36849, USA
Sathyanarayanan N. Aakur
Assistant Professor, Auburn University
Event Understanding · Visual Commonsense · Metagenome Analysis