🤖 AI Summary
This work addresses the unclear stability of spatial representations in multimodal large language models (MLLMs) under counterfactual viewpoint transformations, despite their strong performance in single-view spatial reasoning. To investigate this, the authors introduce CVT-Bench, a controlled diagnostic benchmark that evaluates viewpoint consistency, 360° cycle consistency, and relational stability across 100 synthetic scenes and 6,000 relational queries using hypothetical camera orbit transformations without re-rendering. The study systematically reveals, for the first time, that MLLMs commonly violate cycle consistency and exhibit degraded relational stability under viewpoint changes. Furthermore, it demonstrates that structured input representations—such as scene graphs—significantly enhance the robustness of spatial reasoning, substantially outperforming raw image inputs or textual bounding box descriptions.
📝 Abstract
Multimodal large language models (MLLMs) achieve strong performance on single-view spatial reasoning tasks, yet it remains unclear whether they maintain stable spatial state representations under counterfactual viewpoint changes. We introduce a controlled diagnostic benchmark that evaluates relational consistency under hypothetical camera orbit transformations without re-rendering images. Across 100 synthetic scenes and 6,000 relational queries, we measure viewpoint consistency, 360° cycle agreement, and relational stability over sequential transformations. Despite high single-view accuracy, state-of-the-art MLLMs exhibit systematic degradation under counterfactual viewpoint changes, with frequent violations of cycle consistency and rapid decay in relational stability. We further evaluate multiple input representations, visual input, textual bounding boxes, and structured scene graphs, and show that increasing representational structure improves stability. Our results suggest that single-view spatial accuracy overestimates the robustness of induced spatial representations and that representation structure plays a critical role in counterfactual spatial reasoning.