DeepPHY: Benchmarking Agentic VLMs on Physical Reasoning

📅 2025-08-07
📈 Citations: 0
✹ Influential: 0
📄 PDF
đŸ€– AI Summary
Current vision-language models (VLMs) struggle to accurately comprehend physical laws, perform fine-grained spatial reasoning, and execute long-horizon action planning in complex, dynamic environments, which severely limits their practical deployment in embodied AI tasks; meanwhile, real-world evaluation remains prohibitively expensive. To address this, we propose DeepPHY, the first comprehensive benchmark framework explicitly designed for physics-aware reasoning. It integrates multi-level physics simulation environments (including rigid-body dynamics, gravity modeling, and causal prediction) and features progressively challenging tasks alongside fine-grained, quantitative metrics, enabling end-to-end evaluation of VLMs' perception, reasoning, and planning capabilities. Experimental results reveal a fundamental gap in state-of-the-art models' ability to translate physical knowledge into precise control policies, exposing a critical bottleneck in the development of embodied intelligence.

📝 Abstract
Although Vision Language Models (VLMs) exhibit strong perceptual abilities and impressive visual reasoning, they struggle with attention to detail and precise action planning in complex, dynamic environments, leading to subpar performance. Real-world tasks typically require complex interactions, advanced spatial reasoning, long-term planning, and continuous strategy refinement, usually necessitating understanding the physics rules of the target scenario. However, evaluating these capabilities in real-world scenarios is often prohibitively expensive. To bridge this gap, we introduce DeepPHY, a novel benchmark framework designed to systematically evaluate VLMs' understanding and reasoning about fundamental physical principles through a series of challenging simulated environments. DeepPHY integrates multiple physical reasoning environments of varying difficulty levels and incorporates fine-grained evaluation metrics. Our evaluation finds that even state-of-the-art VLMs struggle to translate descriptive physical knowledge into precise, predictive control.
Problem

Research questions and friction points this paper is trying to address.

Evaluating VLMs' understanding of physical principles in simulations
Assessing VLMs' ability to translate knowledge into precise control
Bridging the gap in physical reasoning benchmarks for VLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmark framework for VLMs' physical reasoning
Simulated environments with varying difficulty levels
Fine-grained evaluation metrics for precise control
Xinrun Xu
Taobao & Tmall Group of Alibaba; Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Pi Bu
Taobao & Tmall Group of Alibaba
Ye Wang
Renmin University of China
Börje F. Karlsson
Beijing Academy of Artificial Intelligence (BAAI)
Machine Learning Systems · Intelligent Agents · Knowledge Mining · Mobile Computing · Multilinguality
Ziming Wang
Taobao & Tmall Group of Alibaba
Tengtao Song
Taobao & Tmall Group of Alibaba
Qi Zhu
Taobao & Tmall Group of Alibaba
Jun Song
Shenzhen University
nanophotonics
Zhiming Ding
Institute of Software, Chinese Academy of Sciences
Bo Zheng
Taobao & Tmall Group of Alibaba