RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of online geometric calibration in radar-camera systems—complicated by sparse height measurements and substantial measurement noise—this paper proposes the first end-to-end differentiable auto-calibration framework. Methodologically, it introduces dual-view feature representation with selective fusion, a multimodal cross-attention matching module, and a noise-robust supervised matcher to enable robust, fully automatic runtime calibration without human intervention. The core contribution lies in the first incorporation of radar height uncertainty modeling directly into the end-to-end network, enabling fully automated, online, pixel-level geometric alignment. Evaluated on the nuScenes benchmark, our method significantly outperforms existing radar-camera and LiDAR-camera calibration approaches, establishing a new state-of-the-art. The source code is publicly available.

📝 Abstract
This paper presents a groundbreaking approach: the first online automatic geometric calibration method for radar and camera systems. Given the significant data sparsity and measurement uncertainty in radar height data, achieving automatic calibration during system operation has long been a challenge. To address the sparsity issue, we propose a Dual-Perspective representation that gathers features from both frontal and bird's-eye views. The frontal view contains rich but sensitive height information, whereas the bird's-eye view provides features robust to height uncertainty. We thereby propose a novel Selective Fusion Mechanism to identify and fuse reliable features from both perspectives, reducing the effect of height uncertainty. Moreover, for each view, we incorporate a Multi-Modal Cross-Attention Mechanism to explicitly find location correspondences through cross-modal matching. During the training phase, we also design a Noise-Resistant Matcher to provide better supervision and enhance the robustness of the matching mechanism against sparsity and height uncertainty. Our experimental results on the nuScenes dataset demonstrate that our method significantly outperforms previous radar-camera auto-calibration methods, as well as existing state-of-the-art LiDAR-camera calibration techniques, establishing a new benchmark for future research. The code is available at https://github.com/nycu-acm/RC-AutoCalib.
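The cross-modal matching idea described in the abstract can be sketched in a few lines: each camera feature acts as a query that attends over radar features, producing a soft correspondence. The sketch below is a minimal, dependency-free illustration of scaled dot-product cross-attention, not the paper's actual implementation; all names (`cross_attention`, `softmax`) are hypothetical.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(queries, keys, values):
    """For each camera query feature, attend over radar key/value features.

    queries: list of d-dim camera feature vectors
    keys, values: lists of radar feature vectors (same count)
    Returns one attended radar feature per query.
    """
    d = len(queries[0])
    out = []
    for q in queries:
        # Scaled dot-product similarity between this query and every radar key.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        # Soft correspondence: weighted sum of radar values.
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out
```

In the full network this would run per view (frontal and bird's-eye) over learned feature maps; the sketch only conveys how attention weights encode location correspondences.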
Problem

Research questions and friction points this paper is trying to address.

Online automatic geometric calibration for radar-camera systems
Addressing radar data sparsity and height uncertainty
Improving feature fusion and cross-modal matching robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-Perspective representation for feature gathering
Selective Fusion Mechanism for reliable feature fusion
Multi-Modal Cross-Attention Mechanism for location correspondences
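The Selective Fusion idea above can be illustrated with a simple gated blend: a confidence gate decides, per feature, how much to trust the height-sensitive frontal view versus the height-robust bird's-eye view. This is a toy sketch under assumed names (`selective_fuse`, `sigmoid`), with a scalar gate standing in for whatever learned gating the paper uses.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def selective_fuse(front_feat, bev_feat, gate_logit):
    """Blend frontal-view and bird's-eye-view features element-wise.

    A gate near 1 trusts the frontal view (rich height cues); a gate
    near 0 falls back to the BEV features, which are robust to height
    uncertainty.
    """
    g = sigmoid(gate_logit)
    return [g * f + (1.0 - g) * b for f, b in zip(front_feat, bev_feat)]
```

In practice the gate would be predicted by the network from both feature maps; the point of the sketch is only that fusion is selective rather than a fixed average.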
Van-Tin Luu
National Yang Ming Chiao Tung University, Taiwan
Yon-Lin Cai
National Yang Ming Chiao Tung University, Taiwan
Vu-Hoang Tran
Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology and Education, Vietnam
Machine Learning, Computer Vision, AI, Deep Learning, Transfer Learning
Wei-Chen Chiu
National Yang Ming Chiao Tung University, Taiwan
Yi-Ting Chen
National Yang Ming Chiao Tung University, Taiwan
Ching-Chun Huang
National Yang Ming Chiao Tung University
Computer Vision, Signal Processing, Machine Learning