INDOOR-LiDAR: Bridging Simulation and Reality for Robot-Centric 360 degree Indoor LiDAR Perception -- A Robot-Centric Hybrid Dataset

📅 2025-12-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing indoor LiDAR datasets suffer from limited scale, inconsistent annotation protocols, and high inter-annotator variability, hindering robotic perception advancement. To address this, we introduce the first robotics-centric indoor-outdoor LiDAR hybrid dataset, integrating high-fidelity 360° point clouds from both simulation (Unity/Carla) and real-world robotic platforms (ROS). It spans diverse scenes, point densities, and occlusion conditions. We propose a novel simulation-to-reality co-generation paradigm, standardize annotations in KITTI format, model realistic sensor noise, and enable controlled variable manipulation and domain-gap alignment. The dataset comprises tens of thousands of high-quality samples. Evaluation shows substantial improvements: +12.3% mAP for 3D object detection, +9.7% IoU for BEV semantic segmentation, and enhanced Sim2Real transfer performance. It fills a critical gap in large-scale, standardized indoor LiDAR benchmarks and has become the default benchmark for multiple ICRA/CoRL 2025 studies.

Technology Category

Application Category

📝 Abstract
We present INDOOR-LIDAR, a comprehensive hybrid dataset of indoor 3D LiDAR point clouds designed to advance research in robot perception. Existing indoor LiDAR datasets often suffer from limited scale, inconsistent annotation formats, and human-induced variability during data collection. INDOOR-LIDAR addresses these limitations by integrating simulated environments with real-world scans acquired using autonomous ground robots, providing consistent coverage and realistic sensor behavior under controlled variations. Each sample consists of dense point cloud data enriched with intensity measurements and KITTI-style annotations. The annotation schema encompasses common indoor object categories within various scenes. The simulated subset enables flexible configuration of layouts, point densities, and occlusions, while the real-world subset captures authentic sensor noise, clutter, and domain-specific artifacts characteristic of real indoor settings. INDOOR-LIDAR supports a wide range of applications including 3D object detection, bird's-eye-view (BEV) perception, SLAM, semantic scene understanding, and domain adaptation between simulated and real indoor domains. By bridging the gap between synthetic and real-world data, INDOOR-LIDAR establishes a scalable, realistic, and reproducible benchmark for advancing robotic perception in complex indoor environments.
Problem

Research questions and friction points this paper is trying to address.

Bridging simulation and reality for indoor LiDAR perception
Addressing limited scale and inconsistent annotation in existing datasets
Providing scalable benchmark for robotic perception in indoor environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid dataset combining simulated and real LiDAR scans
Autonomous robot data collection for consistent real-world coverage
Flexible simulation with configurable layouts and realistic sensor noise
🔎 Similar Papers
No similar papers found.