Synthetic Vision: Training Vision-Language Models to Understand Physics

📅 2024-12-11
🏛️ arXiv.org
📈 Citations: 2
✨ Influential: 1
🤖 AI Summary
Vision-language models (VLMs) show limited capability in dynamic physical reasoning and struggle to generalize their vision-language knowledge to physics prediction. To address this, the paper proposes a dual-path enhancement framework: (1) lightweight supervised fine-tuning of a pre-trained VLM on question-answer pairs generated from physics simulations; and (2) Physics Context Builders (PCBs), specialized VLMs fine-tuned to produce scene descriptions enriched with physical attributes and processes, which serve as plug-and-play contextual prompts for foundational LLMs. The paper also introduces Falling Tower, a new stability-detection QA benchmark spanning simulated and real-world scenes for evaluating Sim2Real transfer. Experiments show that a small fine-tuned VLM significantly outperforms larger state-of-the-art models, that PCBs substantially boost LLM accuracy on both CLEVRER and Falling Tower, and that both approaches remain robust on real-world scenes.

📝 Abstract
Physical reasoning, which involves the interpretation, understanding, and prediction of object behavior in dynamic environments, remains a significant challenge for current Vision-Language Models (VLMs). In this work, we propose two methods to enhance VLMs' physical reasoning capabilities using simulated data. First, we fine-tune a pre-trained VLM using question-answer (QA) pairs generated from simulations relevant to physical reasoning tasks. Second, we introduce Physics Context Builders (PCBs), specialized VLMs fine-tuned to create scene descriptions enriched with physical properties and processes. During physical reasoning tasks, these PCBs can be leveraged as context to assist a Large Language Model (LLM) to improve its performance. We evaluate both of our approaches using multiple benchmarks, including a new stability detection QA dataset called Falling Tower, which includes both simulated and real-world scenes, and CLEVRER. We demonstrate that a small QA fine-tuned VLM can significantly outperform larger state-of-the-art foundational models. We also show that integrating PCBs boosts the performance of foundational LLMs on physical reasoning tasks. Using the real-world scenes from the Falling Tower dataset, we also validate the robustness of both approaches in Sim2Real transfer. Our results highlight the utility that simulated data can have in the creation of learning systems capable of advanced physical reasoning.
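The first method fine-tunes a VLM on QA pairs generated from physics simulations. As a minimal sketch of how such training data could be produced (not the authors' actual pipeline), the snippet below uses PyBullet to build randomized block towers, runs the physics forward, and labels each scene as stable or unstable; the block count, offset range, and motion threshold are illustrative assumptions.

```python
# Sketch of simulation-driven QA pair generation for stability detection.
# Scene parameters are illustrative, not the paper's settings.
import json
import random

import pybullet as p

p.connect(p.DIRECT)  # headless physics server

def build_tower(num_blocks, offset_scale):
    """Stack unit cubes with random lateral offsets; return their body ids."""
    half = 0.5
    box = p.createCollisionShape(p.GEOM_BOX, halfExtents=[half] * 3)
    ids = []
    x = y = 0.0
    for i in range(num_blocks):
        x += random.uniform(-offset_scale, offset_scale)
        y += random.uniform(-offset_scale, offset_scale)
        ids.append(p.createMultiBody(baseMass=1.0,
                                     baseCollisionShapeIndex=box,
                                     basePosition=[x, y, half + i]))
    return ids

qa_pairs = []
for scene_id in range(100):
    p.resetSimulation()
    p.setGravity(0, 0, -9.81)
    p.createMultiBody(0, p.createCollisionShape(p.GEOM_PLANE))  # static ground
    blocks = build_tower(num_blocks=4, offset_scale=0.3)
    start = [p.getBasePositionAndOrientation(b)[0] for b in blocks]
    for _ in range(480):  # ~2 s at PyBullet's default 240 Hz timestep
        p.stepSimulation()
    end = [p.getBasePositionAndOrientation(b)[0] for b in blocks]
    toppled = any(abs(s[2] - e[2]) > 0.1 for s, e in zip(start, end))
    qa_pairs.append({
        "scene": scene_id,  # a rendered image would be attached here
        "question": "Will this tower of blocks remain standing?",
        "answer": "No" if toppled else "Yes",
    })

print(json.dumps(qa_pairs[:2], indent=2))
```

In a full pipeline, each scene would also be rendered to an image and paired with the question for supervised fine-tuning; the label comes for free from the simulator state, which is what makes simulated data attractive here.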
Problem

Research questions and friction points this paper is trying to address.

Enhance physical reasoning in Vision-Language Models (VLMs).
Develop modular frameworks for scalable physical reasoning training.
Improve Sim2Real transfer in physical reasoning tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular framework for physical reasoning enhancement
Specialized VLMs (PCBs) generate detailed, physics-enriched scene contexts for LLMs (see the sketch after this list)
Sim2Real transfer from simulated to real-world data
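
A hedged sketch of the PCB pattern follows: a specialized VLM writes a physics-enriched scene description, which is injected as context into a foundational LLM's prompt. The function names (describe_scene, query_llm) and the prompt template are hypothetical stand-ins, not the paper's interfaces.

```python
# Sketch of the Physics Context Builder (PCB) pattern: a fine-tuned VLM writes
# a physics-enriched scene description, which is prepended to the question
# before a general-purpose LLM answers. All names are illustrative stand-ins.

def describe_scene(image) -> str:
    """Stand-in for a PCB: a VLM fine-tuned to report physical attributes
    (support relations, contacts, balance) and processes (falls, collisions)."""
    # A real PCB would condition on `image`; this canned output is a placeholder.
    return ("Three cubes are stacked. The top cube overhangs its supporting "
            "cube by more than half its width and is unbalanced.")

def query_llm(prompt: str) -> str:
    """Stand-in for any foundational LLM completion call."""
    # Replace with a real client; echoing the prompt keeps the sketch runnable.
    return f"[LLM would answer based on prompt:]\n{prompt}"

def answer_with_pcb(image, question: str) -> str:
    """Assemble the PCB description and the question into one LLM prompt."""
    context = describe_scene(image)
    prompt = (
        "Physical scene context:\n"
        f"{context}\n\n"
        f"Question: {question}\n"
        "Answer using the context above."
    )
    return query_llm(prompt)

if __name__ == "__main__":
    print(answer_with_pcb(image=None, question="Will the stack remain standing?"))
```

The design point of this pattern is modularity: the LLM itself is untouched, so the same PCB can be paired with different foundational models as a plug-and-play context source.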