InfiniCube: Unbounded and Controllable Dynamic 3D Driving Scene Generation with World-Guided Video Models

📅 2024-12-05
🏛️ arXiv.org
📈 Citations: 9 · Influential: 0
🤖 AI Summary
Existing dynamic 3D driving scene generation methods suffer from limited scale and spatiotemporal inconsistency. To address these challenges, the paper proposes a map-guided sparse-voxel generation framework that grounds a video diffusion model in the voxelized world through pixel-aligned guidance buffers. A dual-branch (voxel and pixel) feed-forward network then lifts the generated video to controllable 3D Gaussian scenes, jointly conditioned on HD maps, vehicle bounding boxes, and text descriptions. The framework achieves kilometer-scale scene modeling and surpasses state-of-the-art approaches in geometric and appearance consistency, temporal coherence, and controllability. It supports high-fidelity, arbitrarily extensible, and editable dynamic driving worlds, advancing scalable and realistic simulation for autonomous driving research.
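The "pixel-aligned guidance buffers" are per-pixel maps (e.g., depth and semantics) rendered from the generated voxel world into each camera view, which then condition the video model. Below is a minimal sketch of such a renderer, not the authors' code; the buffer contents, shapes, and the function name `render_guidance_buffers` are illustrative assumptions.

```python
# Minimal sketch of pixel-aligned guidance buffers: project occupied sparse
# voxels into a camera and keep the nearest hit per pixel, yielding a depth
# buffer and a semantic buffer that a video model could be conditioned on.
import numpy as np

def render_guidance_buffers(voxel_centers, voxel_semantics, K, cam_T_world, hw=(90, 160)):
    """voxel_centers:   (N, 3) world-space centers of occupied voxels
    voxel_semantics: (N,)   integer semantic label per voxel
    K:               (3, 3) camera intrinsics
    cam_T_world:     (4, 4) world-to-camera extrinsics
    Returns a depth buffer and a semantic buffer of shape hw."""
    H, W = hw
    pts_h = np.concatenate([voxel_centers, np.ones((len(voxel_centers), 1))], axis=1)
    cam = (cam_T_world @ pts_h.T).T[:, :3]          # points in the camera frame
    in_front = cam[:, 2] > 0.1                      # drop points behind the camera
    cam, sem = cam[in_front], voxel_semantics[in_front]
    uvw = (K @ cam.T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v, z, sem = u[valid], v[valid], cam[valid, 2], sem[valid]

    depth = np.full((H, W), np.inf)
    semantic = np.full((H, W), -1, dtype=int)
    order = np.argsort(-z)                          # write far-to-near: nearest voxel wins
    depth[v[order], u[order]] = z[order]
    semantic[v[order], u[order]] = sem[order]
    return depth, semantic

# Usage with random stand-in voxels and a hypothetical 160x90 camera:
rng = np.random.default_rng(0)
centers = rng.uniform([-20, -2, 2], [20, 2, 60], size=(5000, 3))
labels = rng.integers(0, 10, size=5000)
K = np.array([[100.0, 0, 80], [0, 100.0, 45], [0, 0, 1]])
depth, semantic = render_guidance_buffers(centers, labels, K, np.eye(4))
```

The sort-based z-buffer keeps the sketch dependency-free; a real system would use a proper rasterizer or operate on the sparse-voxel structure directly.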

📝 Abstract
We present InfiniCube, a scalable method for generating unbounded dynamic 3D driving scenes with high fidelity and controllability. Previous methods for scene generation either suffer from limited scales or lack geometric and appearance consistency along generated sequences. In contrast, we leverage the recent advancements in scalable 3D representation and video models to achieve large dynamic scene generation that allows flexible controls through HD maps, vehicle bounding boxes, and text descriptions. First, we construct a map-conditioned sparse-voxel-based 3D generative model to unleash its power for unbounded voxel world generation. Then, we re-purpose a video model and ground it on the voxel world through a set of carefully designed pixel-aligned guidance buffers, synthesizing a consistent appearance. Finally, we propose a fast feed-forward approach that employs both voxel and pixel branches to lift the dynamic videos to dynamic 3D Gaussians with controllable objects. Our method can generate controllable and realistic 3D driving scenes, and extensive experiments validate the effectiveness and superiority of our model.
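As the abstract describes, generation proceeds in three stages: a map-conditioned sparse-voxel model builds the world, a video model grounded on guidance buffers synthesizes consistent appearance, and a feed-forward network lifts the result to dynamic 3D Gaussians. The skeleton below sketches that data flow only; every class is a placeholder stub, and all names and shapes are invented for illustration.

```python
# Schematic three-stage pipeline (stubs only, not the authors' implementation).
from dataclasses import dataclass
import numpy as np

@dataclass
class SceneConditions:
    hd_map: np.ndarray   # rasterized HD map (lanes, road edges)
    boxes: np.ndarray    # (num_vehicles, 7) 3D bounding boxes
    text: str            # free-form scene description

class VoxelWorldGenerator:
    """Stage 1: map-conditioned sparse-voxel world generation (stub)."""
    def generate(self, cond: SceneConditions) -> np.ndarray:
        # stand-in: random occupied voxel centers instead of a learned model
        return np.random.default_rng(0).uniform(-50, 50, size=(10000, 3))

class GuidedVideoModel:
    """Stage 2: video model grounded on pixel-aligned guidance buffers (stub)."""
    def synthesize(self, voxels, cond, num_frames=16):
        # real system: render guidance buffers per frame, condition diffusion on them
        return np.zeros((num_frames, 90, 160, 3))

class GaussianLifter:
    """Stage 3: feed-forward lift of video + voxels to dynamic 3D Gaussians (stub)."""
    def lift(self, video, voxels):
        n = len(voxels)
        return {"means": voxels, "scales": np.full((n, 3), 0.2),
                "colors": np.zeros((n, 3)), "opacities": np.ones(n)}

def generate_scene(cond: SceneConditions):
    voxels = VoxelWorldGenerator().generate(cond)
    video = GuidedVideoModel().synthesize(voxels, cond)
    return GaussianLifter().lift(video, voxels)
```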
Problem

Research questions and friction points this paper is trying to address.

Generating unbounded dynamic 3D driving scenes with high fidelity
Ensuring geometric and appearance consistency in generated sequences
Providing flexible controls via HD maps, bounding boxes, and text
Innovation

Methods, ideas, or system contributions that make the work stand out.

Map-conditioned sparse-voxel 3D generative model
Video model grounded via pixel-aligned guidance buffers
Fast feed-forward approach for dynamic 3D Gaussians (see the sketch below)
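The third innovation, the dual-branch lifting network, pairs each voxel's feature (voxel branch) with an image feature sampled at its projection (pixel branch) and regresses Gaussian parameters in a single forward pass. The module below is a toy PyTorch sketch of that idea, assuming invented feature sizes and a 14-dimensional parameter head; it is not the paper's architecture.

```python
import torch
import torch.nn as nn

class DualBranchGaussianHead(nn.Module):
    """Toy dual-branch head: fuse a per-voxel feature with a pixel feature
    sampled at the voxel's projection, then regress 3D Gaussian parameters
    (offset, scale, rotation, color, opacity). All sizes are invented."""
    def __init__(self, voxel_dim=32, pixel_dim=64, hidden=128):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(voxel_dim + pixel_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # 3 offset + 3 log-scale + 4 quaternion + 3 color + 1 opacity = 14
        self.head = nn.Linear(hidden, 14)

    def forward(self, voxel_feat, pixel_feat):
        h = self.fuse(torch.cat([voxel_feat, pixel_feat], dim=-1))
        out = self.head(h)
        offset = torch.tanh(out[..., 0:3])                      # bounded offset from voxel center
        scale = torch.exp(out[..., 3:6].clamp(max=4))           # positive scales
        rot = nn.functional.normalize(out[..., 6:10], dim=-1)   # unit quaternion
        color = torch.sigmoid(out[..., 10:13])
        opacity = torch.sigmoid(out[..., 13:14])
        return offset, scale, rot, color, opacity

# Usage: 10k voxels, each paired with a pixel feature sampled at its projection.
head = DualBranchGaussianHead()
offset, scale, rot, color, opacity = head(torch.randn(10000, 32), torch.randn(10000, 64))
```

Because the head is feed-forward, lifting a whole scene is one batched pass rather than a per-scene optimization, which is what makes the approach fast relative to fitting Gaussians from scratch.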