Label-Efficient LiDAR Panoptic Segmentation

📅 2025-03-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
High-quality annotated data for LiDAR panoptic segmentation is extremely scarce, severely limiting the generalization capability of existing methods. Method: We propose L3PS, a pseudo-label augmentation framework designed for low annotation cost. First, we train an efficient 2D panoptic segmentation network on a small set of 2D image annotations to generate reliable 2D pseudo-labels, which are then robustly projected onto 3D point clouds. Next, we introduce a geometry-aware 3D refinement module that jointly leverages point cloud clustering, multi-frame temporal accumulation, and ground-point separation to significantly enhance pseudo-label quality. Contribution/Results: This work pioneers the transfer of label-efficient 2D panoptic segmentation paradigms to the LiDAR 3D domain and establishes the first geometry-driven pseudo-label refinement framework specifically tailored for point clouds. On the nuScenes benchmark, our approach improves pseudo-label quality by +10.6 PQ and +7.9 mIoU, enabling training of mainstream LiDAR segmentation models with minimal human annotation—substantially reducing labeling costs.

Technology Category

Application Category

📝 Abstract
A main bottleneck of learning-based robotic scene understanding methods is the heavy reliance on extensive annotated training data, which often limits their generalization ability. In LiDAR panoptic segmentation, this challenge becomes even more pronounced due to the need to simultaneously address both semantic and instance segmentation from complex, high-dimensional point cloud data. In this work, we address the challenge of LiDAR panoptic segmentation with very few labeled samples by leveraging recent advances in label-efficient vision panoptic segmentation. To this end, we propose a novel method, Limited-Label LiDAR Panoptic Segmentation (L3PS), which requires only a minimal amount of labeled data. Our approach first utilizes a label-efficient 2D network to generate panoptic pseudo-labels from a small set of annotated images, which are subsequently projected onto point clouds. We then introduce a novel 3D refinement module that capitalizes on the geometric properties of point clouds. By incorporating clustering techniques, sequential scan accumulation, and ground point separation, this module significantly enhances the accuracy of the pseudo-labels, improving segmentation quality by up to +10.6 PQ and +7.9 mIoU. We demonstrate that these refined pseudo-labels can be used to effectively train off-the-shelf LiDAR segmentation networks. Through extensive experiments, we show that L3PS not only outperforms existing methods but also substantially reduces the annotation burden. We release the code of our work at https://l3ps.cs.uni-freiburg.de.
Problem

Research questions and friction points this paper is trying to address.

Reduces reliance on extensive annotated LiDAR data
Improves LiDAR panoptic segmentation with minimal labels
Enhances pseudo-label accuracy using 3D refinement techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages 2D network for pseudo-labels
Introduces 3D refinement with clustering
Reduces annotation burden significantly
🔎 Similar Papers
No similar papers found.