UniGaussian: Driving Scene Reconstruction from Multiple Camera Models via Unified Gaussian Representations

📅 2024-11-22
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
To address the incompatibility between fisheye camera modeling and 3D Gaussian representations in autonomous driving simulation, this paper proposes the first unified 3D Gaussian framework supporting joint reconstruction from multiple camera models (pinhole and fisheye). Methodologically, the paper introduces (1) a differentiable fisheye-adaptive rendering technique that enables geometrically consistent Gaussian splatting under fisheye distortion via affine warping, and (2) a multimodal joint supervision framework integrating depth, semantics, surface normals, and LiDAR observations, regularized by cross-sensor consistency constraints. Evaluated on real-world urban scene data, the method significantly improves reconstruction fidelity and generalization for fisheye views, while enabling real-time rendering and end-to-end simulation deployment. This work establishes a novel paradigm for neural representation-based autonomous driving simulation.

📝 Abstract
Urban scene reconstruction is crucial for real-world autonomous driving simulators. Although existing methods have achieved photorealistic reconstruction, they mostly focus on pinhole cameras and neglect fisheye cameras. In fact, how to effectively simulate fisheye cameras in driving scenes remains an unsolved problem. In this work, we propose UniGaussian, a novel approach that learns a unified 3D Gaussian representation from multiple camera models for urban scene reconstruction in autonomous driving. Our contributions are two-fold. First, we propose a new differentiable rendering method that distorts 3D Gaussians using a series of affine transformations tailored to fisheye camera models. This addresses the compatibility issue of 3D Gaussian splatting with fisheye cameras, which is hindered by light-ray distortion caused by lenses or mirrors. Moreover, our method maintains real-time rendering while remaining differentiable. Second, built on this differentiable rendering method, we design a new framework that learns a unified Gaussian representation from multiple camera models. By applying affine transformations to adapt to different camera models and regularizing the shared Gaussians with supervision from different modalities, our framework learns a unified 3D Gaussian representation from multi-source input data and achieves holistic driving scene understanding. As a result, our approach models multiple sensors (pinhole and fisheye cameras) and modalities (depth, semantics, surface normals, and LiDAR point clouds). Our experiments show that our method achieves superior rendering quality and fast rendering speed for driving scene simulation.
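The core compatibility issue the abstract describes can be illustrated with a small sketch. Standard 3D Gaussian splatting warps each Gaussian's 3D covariance into image space through a local affine (Jacobian-based) approximation of the camera projection; for a fisheye lens the projection is nonlinear in the ray angle, so the affine warp must be taken around each Gaussian's center. The sketch below is NOT the paper's exact "series of affine transformations" — it assumes a generic equidistant fisheye model (image radius = f·θ) with hypothetical focal length and principal point, and estimates the local Jacobian by finite differences for clarity:

```python
import numpy as np

def fisheye_project(p, f=300.0, c=(320.0, 240.0)):
    """Equidistant fisheye model (assumed for illustration): the image
    radius is f * theta, where theta is the angle between the incoming
    ray and the optical axis (+z)."""
    x, y, z = p
    r = np.hypot(x, y)
    theta = np.arctan2(r, z)
    # As r -> 0, f * theta / r -> f / z (theta ~ r / z near the axis).
    scale = f * theta / r if r > 1e-9 else f / max(z, 1e-9)
    return np.array([c[0] + scale * x, c[1] + scale * y])

def projection_jacobian(p, eps=1e-5):
    """Local affine approximation of the fisheye projection at p:
    the 2x3 Jacobian, estimated here by central finite differences."""
    J = np.zeros((2, 3))
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        J[:, i] = (fisheye_project(p + d) - fisheye_project(p - d)) / (2 * eps)
    return J

def splat_covariance(cov3d, p):
    """Warp a 3D Gaussian covariance (camera frame) into 2D image
    space via the local affine approximation: Sigma2D = J Sigma3D J^T."""
    J = projection_jacobian(p)
    return J @ cov3d @ J.T
```

Because the warp is built from differentiable operations, gradients can flow back to the Gaussian parameters, which is the property the paper's rendering method preserves while handling fisheye distortion.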
Problem

Research questions and friction points this paper is trying to address.

Reconstruct urban scenes for autonomous driving simulators.
Address fisheye camera compatibility in 3D Gaussian splatting.
Unify 3D Gaussian representations from multiple camera models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified 3D Gaussian representation for multiple cameras
Differentiable rendering with fisheye camera affine transformations
Real-time rendering with multi-modal data integration
👥 Authors
Yuan Ren (Huawei Noah's Ark Lab)
Guile Wu (Huawei Technologies Canada Co., Ltd.)
Runhao Li (Huawei Noah's Ark Lab, University of Toronto)
Zheyuan Yang (Huawei Noah's Ark Lab, University of Toronto)
Yibo Liu (Huawei Noah's Ark Lab, York University)
Xingxin Chen (Huawei Noah's Ark Lab)
Tongtong Cao (Researcher, Huawei Noah's Ark Lab)
Bingbing Liu (Researcher, Huawei)