🤖 AI Summary
This work addresses the challenge of jointly ensuring spatiotemporal consistency, geometric fidelity, and view consistency in 4D content generation for large-scale scenes with multiple interacting objects. We propose a two-stage framework, the first to integrate video diffusion priors with neural 4D reconstruction. Methodologically, we co-optimize a pose-conditioned video diffusion model and geometry-aware NeRF reconstruction: Stage I estimates camera trajectories, and Stage II performs controllable temporal generation conditioned on the estimated poses, with differentiable rendering enabling end-to-end joint training. Few-shot learning further improves generalization. Experiments show significant gains over state-of-the-art methods in multi-view consistency and dynamic geometry accuracy, setting new state-of-the-art results on mPSNR and mSSIM. The approach enables high-fidelity, physically plausible, and temporally persistent 4D modeling.
📝 Abstract
The synthesis of spatiotemporally coherent 4D content presents fundamental challenges in computer vision, requiring simultaneous modeling of high-fidelity spatial representations and physically plausible temporal dynamics. Current approaches often fail to maintain view consistency while handling complex scene dynamics, particularly in large-scale environments with multiple interacting elements. This work introduces Dream4D, a novel framework that bridges this gap through the synergy of controllable video generation and neural 4D reconstruction. Our approach adopts a two-stage architecture: it first predicts an optimal camera trajectory from a single image using few-shot learning, then generates geometrically consistent multi-view sequences via a specialized pose-conditioned diffusion process, and finally converts these sequences into a persistent 4D representation. This framework is the first to leverage both the rich temporal priors of video diffusion models and the geometric awareness of reconstruction models, which significantly facilitates 4D generation and yields higher quality (e.g., mPSNR, mSSIM) than existing methods.
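The abstract's pipeline (single image → camera trajectory → pose-conditioned multi-view generation → persistent 4D representation) can be sketched as the composition of three stages. The sketch below is a minimal illustration of that data flow only; every function and class name is a hypothetical placeholder, not the authors' actual API, and the bodies are stand-ins for the learned models.

```python
# Illustrative sketch of the two-stage-plus-reconstruction data flow
# described in the abstract. All names are hypothetical placeholders.
from dataclasses import dataclass
from typing import List

@dataclass
class CameraPose:
    position: tuple   # camera center (x, y, z)
    look_at: tuple    # point the camera faces

def predict_trajectory(image: str, num_frames: int) -> List[CameraPose]:
    """Stage I (placeholder): few-shot trajectory prediction from one image.
    Here a simple lateral sweep stands in for the learned predictor."""
    return [CameraPose(position=(t * 0.1, 0.0, 2.0), look_at=(0.0, 0.0, 0.0))
            for t in range(num_frames)]

def generate_sequence(image: str, poses: List[CameraPose]) -> List[str]:
    """Stage II (placeholder): a pose-conditioned diffusion model would
    render one geometrically consistent frame per camera pose."""
    return [f"frame@{p.position}" for p in poses]

def reconstruct_4d(frames: List[str]) -> dict:
    """Final step (placeholder): lift the multi-view sequence into a
    persistent 4D representation (e.g., a dynamic NeRF)."""
    return {"num_frames": len(frames), "representation": "4D"}

# End-to-end: single image -> trajectory -> frames -> 4D scene
poses = predict_trajectory("input.png", num_frames=8)
frames = generate_sequence("input.png", poses)
scene_4d = reconstruct_4d(frames)
```

The point of the composition is that Stage II consumes Stage I's poses, which is what makes the generated views geometrically consistent and, per the summary, allows the two stages to be trained jointly through differentiable rendering.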