Faster Training, Fewer Labels: Self-Supervised Pretraining for Fine-Grained BEV Segmentation

📅 2026-02-20
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of scaling BEV semantic segmentation for autonomous driving, which is hindered by the reliance on costly and inconsistently annotated multi-camera ground truth. The authors propose a two-stage training approach: first, self-supervised pretraining in which BEV predictions are differentiably reprojected into the image plane and trained against image-plane pseudo-labels generated by Mask2Former, augmented by a temporal consistency loss; second, supervised fine-tuning using only 50% of the annotated data. The study presents the first integration of differentiable reprojection and image-view pseudo-labels for BEV self-supervised pretraining, substantially reducing annotation dependency while enhancing model transferability. On the nuScenes benchmark, the method achieves up to a 2.5-point mIoU improvement over the fully supervised baseline with half the labeled data, and reduces total training time by up to two-thirds.
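
To make the reprojection objective concrete, the sketch below shows one way such a pretraining loss could look in PyTorch, under simplifying assumptions not stated in the paper: a pinhole camera with intrinsics `K`, a world-to-camera extrinsic `world_to_cam`, a flat ground plane at z = 0, and a square BEV grid spanning ±`bev_range` meters around the ego vehicle. The helper names (`ground_plane_grid`, `reprojection_loss`) are illustrative, not the authors' code; the actual method operates on BEVFormer predictions and is more involved than this flat-plane warp.

```python
import torch
import torch.nn.functional as F

def ground_plane_grid(K, world_to_cam, H, W, bev_range=50.0):
    """For each pixel, intersect its viewing ray with the ground plane
    (z = 0) and return normalized BEV (x, y) sampling coordinates plus a
    validity mask for pixels below the horizon. Assumes the BEV grid is
    centered on the ego vehicle and spans [-bev_range, bev_range] meters."""
    device = K.device
    v, u = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    ones = torch.ones_like(u)
    rays_cam = torch.linalg.inv(K) @ torch.stack(
        [u.flatten(), v.flatten(), ones.flatten()]
    )                                              # (3, H*W) rays, camera frame
    R, t = world_to_cam[:3, :3], world_to_cam[:3, 3]
    rays_world = R.T @ rays_cam                    # rays rotated into world frame
    origin = -R.T @ t                              # camera center in world frame
    s = -origin[2] / rays_world[2]                 # ray scale at which z hits 0
    valid = (s > 0).reshape(H, W)                  # pixels below the horizon
    pts = origin[:, None] + s * rays_world         # (3, H*W) ground-plane points
    grid = pts[:2].T.reshape(H, W, 2) / bev_range  # normalize to [-1, 1]
    return grid, valid

def reprojection_loss(bev_logits, pseudo_labels, K, world_to_cam):
    """bev_logits: (1, C, Hb, Wb); pseudo_labels: (1, H, W) long tensor."""
    H, W = pseudo_labels.shape[-2:]
    grid, valid = ground_plane_grid(K, world_to_cam, H, W)
    # grid_sample performs the differentiable BEV -> image-plane warp.
    img_logits = F.grid_sample(bev_logits, grid[None], align_corners=False)
    labels = pseudo_labels.clone()
    labels[~valid[None]] = 255                     # ignore above-horizon pixels
    return F.cross_entropy(img_logits, labels, ignore_index=255)
```

Because `F.grid_sample` is differentiable with respect to its input, gradients from the image-plane cross-entropy flow back into the BEV logits, which is the property this style of pretraining relies on.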

๐Ÿ“ Abstract
Dense Bird's Eye View (BEV) semantic maps are central to autonomous driving, yet current multi-camera methods depend on costly, inconsistently annotated BEV ground truth. We address this limitation with a two-phase training strategy for fine-grained road marking segmentation that requires no BEV supervision during pretraining and halves the amount of training data during fine-tuning, while still outperforming the comparable supervised baseline model. During self-supervised pretraining, BEVFormer predictions are differentiably reprojected into the image plane and trained against multi-view semantic pseudo-labels generated by the widely used semantic segmentation model Mask2Former. A temporal loss encourages consistency across frames. The subsequent supervised fine-tuning phase requires only 50% of the dataset and significantly less training time. With our method, fine-tuning benefits from the rich priors learned during pretraining, boosting performance and BEV segmentation quality (up to +2.5 pp mIoU over the fully supervised baseline) on nuScenes. It simultaneously halves the use of annotated data and reduces total training time by up to two-thirds. The results demonstrate that differentiable reprojection combined with camera-perspective pseudo-labels yields transferable BEV features and a scalable path toward reduced-label autonomous perception.
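
The abstract leaves the exact form of the temporal loss open. One plausible, minimal sketch (an assumption, not the paper's definition) warps the previous frame's BEV prediction into the current ego frame using a known SE(2) ego-motion and penalizes disagreement where the two grids overlap; the name `temporal_consistency_loss` and the `ego_motion` input are hypothetical.

```python
import torch
import torch.nn.functional as F

def temporal_consistency_loss(bev_prev, bev_curr, ego_motion, bev_range=50.0):
    """bev_prev / bev_curr: (1, C, Hb, Wb) logits; ego_motion: (3, 3) SE(2)
    matrix mapping previous-frame BEV coordinates to current-frame ones."""
    # affine_grid needs the inverse map (current cell -> previous location),
    # expressed in normalized [-1, 1] grid coordinates.
    theta = torch.linalg.inv(ego_motion)[:2, :].clone()
    theta[:, 2] /= bev_range                       # metric -> grid units
    grid = F.affine_grid(theta[None], list(bev_curr.shape), align_corners=False)
    warped_prev = F.grid_sample(bev_prev, grid, align_corners=False)
    # Only compare cells whose source location lies inside the previous grid.
    in_bounds = (grid.abs() <= 1).all(dim=-1)      # (1, Hb, Wb)
    probs_prev = warped_prev.softmax(dim=1)
    probs_curr = bev_curr.softmax(dim=1)
    diff = (probs_curr - probs_prev).pow(2).mean(dim=1)  # per-cell MSE
    return diff[in_bounds].mean()
```

Comparing class probabilities rather than raw logits keeps the penalty bounded, and masking to in-bounds cells avoids penalizing regions that left the previous field of view.
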
Problem

Research questions and friction points this paper is trying to address.

BEV segmentation
self-supervised pretraining
annotation efficiency
autonomous driving
fine-grained road marking
Innovation

Methods, ideas, or system contributions that make the work stand out.

self-supervised pretraining
BEV segmentation
differentiable reprojection
pseudo-labeling
annotation-efficient learning
Daniel Busch
University of Wuppertal, APTIV
Christian Bohn
University of Wuppertal
Thomas Kurbiel
APTIV
Klaus Friedrichs
APTIV
Richard Meyes
University of Wuppertal
Tobias Meisen
Bergische Universität Wuppertal, previously RWTH Aachen University
Industrial AI, Deep Learning, Deep Reinforcement Learning, Semantic Technologies, Knowledge Graph