🤖 AI Summary
This work addresses the challenge of robust relocalization for autonomous vehicles in GNSS-denied or signal-degraded environments by proposing a novel multimodal relocalization framework that operates without any GNSS priors. The method fuses LiDAR and camera data to generate bird’s-eye-view (BEV) semantic segmentation and, for the first time, integrates multimodal BEV representations with neural map matching through a context-aware cross-modal fusion strategy and a cross-attention-driven map retrieval mechanism. Experimental results demonstrate that the proposed approach achieves a Recall@1m of 39.8% under GNSS-denied conditions and adverse weather, approximately doubling the performance of the current state-of-the-art baseline, thereby significantly enhancing both localization robustness and accuracy.
📝 Abstract
Localization in GNSS-denied and GNSS-degraded environments is a challenge for the safe, widespread deployment of autonomous vehicles. Such GNSS-challenged environments require alternative methods for robust localization. In this work, we propose BEVMapMatch, a framework for robust vehicle re-localization on a known map without the need for GNSS priors. BEVMapMatch uses a context-aware LiDAR-camera fusion method to generate multimodal Bird's Eye View (BEV) segmentations around the ego vehicle in both good and adverse weather conditions. Leveraging a search mechanism based on cross-attention, the generated BEV segmentation maps are then used to retrieve candidate map patches for map matching. Finally, BEVMapMatch performs finer alignment between the top retrieved candidate and the generated BEV segmentation, achieving accurate global localization without GNSS. Accumulating multiple frames of generated BEV segmentation further improves localization accuracy. Extensive evaluations show that BEVMapMatch outperforms existing methods for re-localization in GNSS-denied and adverse environments, achieving a Recall@1m of 39.8%, nearly twice that of the best-performing re-localization baseline. Our code and data will be made available at https://github.com/ssuralcmu/BEVMapMatch.git.
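The retrieval step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the embeddings, dimensions, and function names below are hypothetical, and the cross-attention retrieval is simplified to a single scaled dot-product attention over precomputed map-patch embeddings, with the top-scoring patch returned as the candidate for fine alignment.

```python
import numpy as np

def cross_attention_scores(query, patches):
    """Scaled dot-product attention of one BEV query over map patches.

    query: (d,) embedding of the generated BEV segmentation (hypothetical).
    patches: (N, d) embeddings of candidate map patches (hypothetical).
    Returns a softmax distribution over the N patches.
    """
    d = query.shape[-1]
    logits = patches @ query / np.sqrt(d)
    exp = np.exp(logits - logits.max())  # stabilized softmax
    return exp / exp.sum()

def retrieve_top_candidate(query, patches):
    """Return the index of the best-matching map patch and all scores."""
    scores = cross_attention_scores(query, patches)
    return int(np.argmax(scores)), scores

# Toy data: 100 random map-patch embeddings; the query is a noisy copy
# of patch 42, standing in for a BEV segmentation observed near it.
rng = np.random.default_rng(0)
map_patches = rng.normal(size=(100, 64))
true_idx = 42
query = map_patches[true_idx] + 0.1 * rng.normal(size=64)

best, scores = retrieve_top_candidate(query, map_patches)
print("retrieved patch:", best)
```

In the full system, the retrieved patch would then be refined by a finer alignment step against the BEV segmentation to produce the final pose estimate.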