🤖 AI Summary
To address cross-modal misalignment between LiDAR and camera BEV features, caused by sensor calibration errors and inaccurate depth estimation, this paper proposes a robust alignment framework based on contrastive learning. The method introduces (1) two modality-specific instance modeling modules, L-Instance for LiDAR and C-Instance for camera, which construct BEV feature instances tailored to each sensor's characteristics; and (2) an InstanceFusion mechanism that integrates contrastive alignment with graph matching, jointly optimizing local geometric consistency and global structural correspondence. On the nuScenes validation set, the approach achieves 70.3% mAP, outperforming BEVFusion by 1.8%. Under calibration and depth-noise perturbations it is markedly more robust, outperforming BEVFusion by 7.3%. These results demonstrate significant improvements in both the accuracy and the stability of multi-modal BEV fusion.
📝 Abstract
In 3D object detection, fusing heterogeneous features from LiDAR and camera sensors into a unified Bird's Eye View (BEV) representation is a widely adopted paradigm. However, existing methods are often compromised by imprecise sensor calibration, which leads to feature misalignment in LiDAR-camera BEV fusion and, in turn, to errors in depth estimation for the camera branch, ultimately causing misalignment between LiDAR and camera BEV features. In this work, we propose ContrastAlign, a novel approach that uses contrastive learning to enhance the alignment of heterogeneous modalities and thereby improve the robustness of the fusion process. Specifically, our approach includes an L-Instance module, which directly outputs LiDAR instance features from the LiDAR BEV features, and a C-Instance module, which predicts camera instance features through RoI (Region of Interest) pooling on the camera BEV features. We further propose an InstanceFusion module, which uses contrastive learning to generate similar instance features across the heterogeneous modalities, and then applies graph matching to compute the similarity between neighboring camera instance features and the similar instance features, completing the alignment of instance features. Our method achieves state-of-the-art performance, with an mAP of 70.3% on the nuScenes validation set, surpassing BEVFusion by 1.8%. Importantly, our method outperforms BEVFusion by 7.3% under conditions with misalignment noise.
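The abstract does not include an implementation, but the contrastive step it describes (pulling matched LiDAR/camera instance features together while pushing apart mismatched ones) can be illustrated with a symmetric InfoNCE-style loss. The sketch below is a minimal assumption-laden illustration, not the paper's actual InstanceFusion module: the function name, feature shapes, and temperature value are all invented for the example.

```python
import numpy as np

def info_nce_loss(lidar_feats, cam_feats, temperature=0.07):
    """Symmetric InfoNCE-style loss over paired instance features.

    lidar_feats, cam_feats: (N, D) arrays where row i of each array is
    assumed to describe the same instance. Matched LiDAR/camera pairs act
    as positives; all other cross-modal pairs act as negatives.
    (Illustrative only; not the paper's exact formulation.)
    """
    # L2-normalize so dot products are cosine similarities
    l = lidar_feats / np.linalg.norm(lidar_feats, axis=1, keepdims=True)
    c = cam_feats / np.linalg.norm(cam_feats, axis=1, keepdims=True)
    logits = l @ c.T / temperature      # (N, N) cross-modal similarity matrix
    idx = np.arange(len(l))             # positives lie on the diagonal

    def xent(lg):
        # cross-entropy of each row against its diagonal positive
        lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the LiDAR->camera and camera->LiDAR directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing such a loss makes cross-modal features of the same instance similar, which is what allows a subsequent similarity-based matching step to re-associate instances even when calibration noise shifts their BEV locations.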