🤖 AI Summary
Existing world models predominantly rely on fine-tuned 2D video diffusion models, which suffer from weak representational capacity and slow inference, hindering effective part-level dynamic modeling.
Method: We propose PartRM, the first 4D reconstruction framework explicitly designed for part-level dynamics, jointly learning static geometry, appearance, and part-wise motion from multi-view images of a static object. Our approach introduces a novel multi-scale drag embedding module and a two-stage decoupled training strategy to mitigate data scarcity and catastrophic forgetting in 4D learning. Built on 3D Gaussian splatting, it integrates multi-view geometric constraints, a cross-state reconstruction loss, and stage-wise optimization of dynamics and appearance.
Contribution/Results: We introduce PartDrag-4D, a large-scale dataset with over 20,000 part states. Our method achieves state-of-the-art performance on part-level motion learning and can support manipulation tasks in robotics. Code, data, and models are fully open-sourced.
📝 Abstract
As interest grows in world models that predict future states from current observations and actions, accurately modeling part-level dynamics has become increasingly relevant for various applications. Existing approaches, such as Puppet-Master, rely on fine-tuning large-scale pre-trained video diffusion models, which are impractical for real-world use due to the limitations of 2D video representation and slow processing times. To overcome these challenges, we present PartRM, a novel 4D reconstruction framework that simultaneously models appearance, geometry, and part-level motion from multi-view images of a static object. PartRM builds upon large 3D Gaussian reconstruction models, leveraging their extensive knowledge of appearance and geometry in static objects. To address data scarcity in 4D, we introduce the PartDrag-4D dataset, providing multi-view observations of part-level dynamics across over 20,000 states. We enhance the model's understanding of interaction conditions with a multi-scale drag embedding module that captures dynamics at varying granularities. To prevent catastrophic forgetting during fine-tuning, we implement a two-stage training process that focuses sequentially on motion and appearance learning. Experimental results show that PartRM establishes a new state-of-the-art in part-level motion learning and can be applied in manipulation tasks in robotics. Our code, data, and models are publicly available to facilitate future research.
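The two-stage training described above can be sketched as a simple schedule that first optimizes motion with appearance frozen, then the reverse, which is one common way to limit catastrophic forgetting of a pretrained prior. This is a hedged illustration only; the function name, epoch split, and flag layout are assumptions, not the paper's code.

```python
def stage_schedule(epoch, motion_epochs=10):
    """Return which parameter groups are trainable at a given epoch
    under a hypothetical two-stage decoupled schedule: stage 1 learns
    part-level motion, stage 2 refines appearance."""
    if epoch < motion_epochs:
        return {"motion": True, "appearance": False}  # stage 1: motion only
    return {"motion": False, "appearance": True}      # stage 2: appearance only

# Example: epoch 0 trains motion, epoch 10 switches to appearance.
print(stage_schedule(0), stage_schedule(10))
```

In practice the flags would be applied by toggling gradient updates on the corresponding parameter groups of the Gaussian reconstruction backbone.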