AI Summary
Current multimodal foundation models (MFMs) lack critical capabilities required for world modeling, namely counterfactual reasoning, dynamic process simulation, spatiotemporal understanding, and controllable visual generation. To address this, we propose a controllable 4D generative framework integrating causal inference, counterfactual thinking, and structured spatiotemporal reasoning. Our approach leverages scene graph modeling, multimodal conditional control, and a discriminative-generative co-architectural design to achieve fine-grained semantic alignment and user-intent-driven image/video generation. Key contributions include: (1) the first prototype of a multimodal world model supporting editability, interactivity, and deformability; and (2) substantial improvements in dynamic scene comprehension and high-level semantic-consistent generation, achieving state-of-the-art performance on counterfactual reasoning and spatiotemporally controllable generation tasks.
Abstract
Humans understand the world through the integration of multiple sensory modalities, enabling them to perceive, reason about, and imagine dynamic physical processes. Inspired by this capability, multimodal foundation models (MFMs) have emerged as powerful tools for multimodal understanding and generation. However, today's MFMs fall short of serving as effective world models. They lack essential abilities such as performing counterfactual reasoning, simulating dynamics, understanding spatiotemporal information, controlling generated visual outcomes, and carrying out multifaceted reasoning. We investigate what it takes to bridge the gap between multimodal foundation models and world models. We begin by improving the reasoning capabilities of MFMs through discriminative tasks, equipping them with structured reasoning skills such as causal inference, counterfactual thinking, and spatiotemporal reasoning, enabling them to go beyond surface correlations and understand deeper relationships within visual and textual data. Next, we explore the generative capabilities of multimodal foundation models across both image and video modalities, introducing new frameworks for structured and controllable generation. Our approaches incorporate scene graphs, multimodal conditioning, and multimodal alignment strategies to guide the generation process, ensuring consistency with high-level semantics and fine-grained user intent. We further extend these techniques to controllable 4D generation, enabling interactive, editable, and morphable object synthesis over time and space.
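To make the scene-graph conditioning idea concrete, below is a minimal, hypothetical sketch of a scene graph used as a structured conditioning signal for generation. Every name here (SceneGraph, Node, Edge, to_prompt) is an illustrative assumption, not an interface from this work; real systems would typically encode the graph with a learned graph encoder or structured tokenizer rather than flattening it to text.

```python
# Illustrative sketch only: a scene graph as a structured conditioning signal.
# All class and method names (SceneGraph, Node, Edge, to_prompt) are
# hypothetical and not taken from the paper.
from dataclasses import dataclass, field


@dataclass
class Node:
    """An object in the scene, with optional attributes (e.g., color)."""
    name: str
    attributes: list[str] = field(default_factory=list)


@dataclass
class Edge:
    """A directed relation between two objects, e.g., cat -(sitting on)-> mat."""
    subject: str
    relation: str
    obj: str


@dataclass
class SceneGraph:
    """Nodes plus relations: the structured counterpart of a free-form prompt."""
    nodes: list[Node]
    edges: list[Edge]

    def to_prompt(self) -> str:
        """Linearize the graph into text a generator could consume.
        This is the simplest possible stand-in for a learned graph encoder."""
        return ", ".join(f"{e.subject} {e.relation} {e.obj}" for e in self.edges)


if __name__ == "__main__":
    graph = SceneGraph(
        nodes=[Node("cat", ["orange"]), Node("mat", ["woven"])],
        edges=[Edge("cat", "sitting on", "mat")],
    )
    # The linearized graph would serve as one conditioning input among several
    # (e.g., alongside reference images or layout maps in multimodal control).
    print(graph.to_prompt())  # -> "cat sitting on mat"
```

The appeal of such a structured representation is that each node and relation can be individually aligned and verified against the generated output, which is what makes fine-grained, user-intent-driven control tractable compared with a single free-form prompt.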