3DRealCar: An In-the-wild RGB-D Car Dataset with 360-degree Views

📅 2024-06-07
🏛️ arXiv.org
📈 Citations: 7
Influential: 0
🤖 AI Summary
Existing 3D car datasets are predominantly synthetic or low-quality, limiting high-fidelity reconstruction and understanding in real-world scenarios. To address this, we introduce 3DRealCar, the first large-scale real-world 3D car dataset, comprising 2,500 vehicles across diverse brands, each captured with roughly 200 high-resolution 360-degree RGB-D frames and accurate point clouds. Our methodology features multi-view dense sampling; controlled acquisition under reflective, standard, and dark lighting conditions; background removal; and coordinate normalization, enabling for the first time high-volume, high-fidelity, and high-diversity 3D car data collection in realistic settings. The dataset delivers standardized, background-free, axis-aligned point clouds with fine-grained vehicle-part segmentation. It significantly improves 3D reconstruction quality under standard illumination and exposes critical performance bottlenecks of current methods under reflective and dark conditions. 3DRealCar establishes a new benchmark for both 2D and 3D automotive perception tasks.
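The coordinate normalization described above (centering each scanned car and aligning it to a unified axis) can be sketched with a simple PCA-based alignment. This is an illustrative reimplementation assuming only numpy, not the paper's actual pipeline; the function name and exact convention (length > width > height) are assumptions for the sketch.

```python
import numpy as np

def normalize_car_pointcloud(points: np.ndarray) -> np.ndarray:
    """Center a car point cloud and align its principal axes with the
    coordinate axes. Hypothetical sketch of the normalization step;
    the paper's exact procedure may differ."""
    # Translate the centroid to the origin.
    centered = points - points.mean(axis=0)
    # PCA: eigenvectors of the covariance matrix give the dominant axes.
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    # Sort axes by decreasing variance (for cars: length > width > height).
    order = np.argsort(eigvals)[::-1]
    rotation = eigvecs[:, order]
    # Ensure a proper right-handed rotation (determinant = +1).
    if np.linalg.det(rotation) < 0:
        rotation[:, -1] *= -1
    return centered @ rotation
```

Applied to every instance, a step like this yields the standardized, axis-aligned point clouds the summary mentions, making reconstruction and rendering controllable across the dataset.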

📝 Abstract
3D cars are commonly used in self-driving systems, virtual/augmented reality, and games. However, existing 3D car datasets are either synthetic or low-quality, leaving a significant gap relative to high-quality real-world 3D car datasets and limiting their applications in practical scenarios. In this paper, we propose the first large-scale 3D real car dataset, termed 3DRealCar, offering three distinctive features. (1) High-Volume: 2,500 cars are meticulously scanned by 3D scanners, obtaining car images and point clouds with real-world dimensions; (2) High-Quality: each car is captured in an average of 200 dense, high-resolution 360-degree RGB-D views, enabling high-fidelity 3D reconstruction; (3) High-Diversity: the dataset contains various cars from over 100 brands, collected under three distinct lighting conditions, including reflective, standard, and dark. Additionally, we offer detailed car parsing maps for each instance to promote research in car parsing tasks. Moreover, we remove background point clouds and standardize the car orientation to a unified axis, enabling reconstruction of cars without background and controllable rendering. We benchmark 3D reconstruction results with state-of-the-art methods across each lighting condition in 3DRealCar. Extensive experiments demonstrate that the standard lighting condition part of 3DRealCar can be used to produce a large number of high-quality 3D cars, improving various 2D and 3D tasks related to cars. Notably, our dataset brings insight into the fact that recent 3D reconstruction methods face challenges in reconstructing high-quality 3D cars under reflective and dark lighting conditions. Our dataset is available at https://xiaobiaodu.github.io/3drealcar/.
Problem

Research questions and friction points this paper is trying to address.

Lack of high-quality real-world 3D car datasets
Challenges in 3D reconstruction under diverse lighting conditions
Need for diverse and detailed car data for various applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale smartphone-scanned 3D car dataset
High-resolution 360-degree RGB-D views
Diverse lighting conditions and car brands
Xiaobiao Du
The University of Queensland
Haiyang Sun
Li Auto
Shuyun Wang
The University of Queensland
Zhuojie Wu
The University of Queensland
Hongwei Sheng
The University of Queensland
Jiaying Ying
The University of Queensland
Ming Lu
Peking University
Tianqing Zhu
City University of Macau
Kun Zhan
Li Auto
Xin Yu
The University of Queensland