ViewMorpher3D: A 3D-aware Diffusion Framework for Multi-Camera Novel View Synthesis in Autonomous Driving

📅 2026-01-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of artifacts in multi-view rendering for autonomous driving simulation, which often arise from view extrapolation or sparse observations and compromise the reliability of perception and planning algorithms. The authors propose a 3D-aware diffusion-based framework for multi-view image enhancement, introducing diffusion models to novel view synthesis across multiple cameras for the first time. The method jointly leverages camera poses, 3D geometric priors, and neighboring or overlapping reference views, supporting variable numbers of cameras and flexible view configurations. By integrating Gaussian splatting rendering with cross-view consistency optimization, the approach significantly improves image quality and geometric fidelity on real-world driving datasets, effectively suppressing rendering artifacts while preserving structural consistency.

📝 Abstract
Autonomous driving systems rely heavily on multi-view images to ensure accurate perception and robust decision-making. To effectively develop and evaluate perception stacks and planning algorithms, realistic closed-loop simulators are indispensable. While 3D reconstruction techniques such as Gaussian Splatting offer promising avenues for simulator construction, the rendered novel views often exhibit artifacts, particularly in extrapolated perspectives or when available observations are sparse. We introduce ViewMorpher3D, a multi-view image enhancement framework based on image diffusion models, designed to elevate photorealism and multi-view coherence in driving scenes. Unlike single-view approaches, ViewMorpher3D jointly processes a set of rendered views conditioned on camera poses, 3D geometric priors, and temporally adjacent or spatially overlapping reference views. This enables the model to infer missing details, suppress rendering artifacts, and enforce cross-view consistency. Our framework accommodates variable numbers of cameras and flexible reference/target view configurations, making it adaptable to diverse sensor setups. Experiments on real-world driving datasets demonstrate substantial improvements in image quality metrics, effectively reducing artifacts while preserving geometric fidelity.
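The abstract conditions the diffusion model on camera poses, 3D geometric priors, and spatially overlapping reference views. A common way to exploit such priors is forward warping: back-project a reference view's pixels with their depths, then re-project them into a target camera. The sketch below is a minimal, hedged illustration of that geometric step under a pinhole model; the function name, the shared intrinsics `K`, and the availability of a reference depth map are assumptions for illustration, not the paper's actual mechanism:

```python
import numpy as np

def warp_reference_to_target(depth_ref, K, T_ref_to_world, T_world_to_tgt):
    """Hypothetical helper: back-project reference pixels with their depths,
    then re-project them into the target camera (pinhole model assumed)."""
    h, w = depth_ref.shape
    # pixel grid in homogeneous coordinates, shape (h*w, 3)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    # rays in the reference camera frame, scaled by per-pixel depth
    pts_cam = (np.linalg.inv(K) @ pix.T).T * depth_ref.reshape(-1, 1)
    pts_h = np.hstack([pts_cam, np.ones((h * w, 1))])
    # reference camera -> world -> target camera (4x4 homogeneous transforms)
    pts_tgt = (T_world_to_tgt @ (T_ref_to_world @ pts_h.T)).T[:, :3]
    uv = (K @ pts_tgt.T).T
    uv = uv[:, :2] / uv[:, 2:3]  # perspective divide
    return uv.reshape(h, w, 2), pts_tgt[:, 2].reshape(h, w)
```

The resulting target-view coordinates can be used to sample reference pixels as conditioning input, or to score photometric agreement between overlapping cameras in the spirit of the cross-view consistency optimization described above.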
Problem

Research questions and friction points this paper is trying to address.

novel view synthesis
rendering artifacts
multi-view consistency
autonomous driving simulation
3D reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D-aware diffusion
multi-camera novel view synthesis
cross-view consistency
Gaussian Splatting enhancement
autonomous driving simulation