🤖 AI Summary
Vision-language models (VLMs) show limited capability in dynamic physical reasoning and struggle to generalize their vision-language knowledge to physics prediction. To address this, we propose a dual-path enhancement framework: (1) lightweight supervised fine-tuning on high-quality question-answer pairs generated from physics simulations; and (2) the Physics Context Builder (PCB), a module that explicitly encodes physical attributes and processes as plug-and-play contextual prompts injected into both VLMs and LLMs. We also introduce Falling Tower, a new benchmark, the first of its kind, that systematically evaluates Sim2Real transfer robustness for physical reasoning. Experiments show that fine-tuned compact VLMs significantly outperform large-scale state-of-the-art models, that PCB substantially boosts LLM accuracy on both CLEVRER and Falling Tower, and that the framework generalizes well to real-world scenes.
📝 Abstract
Physical reasoning, the interpretation, understanding, and prediction of object behavior in dynamic environments, remains a significant challenge for current Vision-Language Models (VLMs). In this work, we propose two methods that use simulated data to enhance the physical reasoning capabilities of VLMs. First, we fine-tune a pre-trained VLM on question-answer (QA) pairs generated from simulations relevant to physical reasoning tasks. Second, we introduce Physics Context Builders (PCBs): specialized VLMs fine-tuned to produce scene descriptions enriched with physical properties and processes. During physical reasoning tasks, these descriptions serve as context that helps a Large Language Model (LLM) improve its performance. We evaluate both approaches on multiple benchmarks, including CLEVRER and Falling Tower, a new stability-detection QA dataset containing both simulated and real-world scenes. We show that a small QA-fine-tuned VLM can significantly outperform larger state-of-the-art foundation models, and that integrating PCBs boosts the performance of foundation LLMs on physical reasoning tasks. Using the real-world scenes from Falling Tower, we further validate the robustness of both approaches under Sim2Real transfer. Our results highlight the utility of simulated data for building learning systems capable of advanced physical reasoning.
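To make the PCB-as-context pattern concrete, the following is a minimal sketch of how a PCB's physics-rich scene description might be injected into an LLM prompt. All function and variable names here are illustrative assumptions, not the paper's actual implementation; a real PCB would be a fine-tuned VLM running on the input image.

```python
# Hypothetical sketch of the Physics Context Builder (PCB) usage pattern:
# a PCB produces a scene description enriched with physical properties,
# which is then injected as context into an LLM's prompt.
# All names below are illustrative, not from the paper.

def pcb_describe(image) -> str:
    """Stand-in for a PCB (a VLM fine-tuned to emit physical attributes).
    A real implementation would run the fine-tuned model on the image."""
    return ("Scene: a tower of 4 blocks. Block 3 overhangs block 2 by 60% "
            "of its width; the tower's center of mass lies outside the "
            "support base of block 2.")

def build_physical_reasoning_prompt(image, question: str) -> str:
    """Compose an LLM prompt with the PCB description as plug-in context."""
    context = pcb_describe(image)
    return (
        "You are answering a physical reasoning question about a scene.\n"
        f"Physical scene context (from PCB): {context}\n"
        f"Question: {question}\n"
        "Answer concisely, using only the physical context above."
    )

prompt = build_physical_reasoning_prompt(None, "Will the tower remain stable?")
print(prompt)
```

The key design point is modularity: the LLM itself is unchanged, and the PCB's output is plugged in purely through the prompt, so the same PCB can serve different downstream models.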