🤖 AI Summary
To quantify uncertainty in the online extrinsic calibration of sensors for autonomous driving under dynamic conditions, this paper proposes the first uncertainty-aware real-time extrinsic calibration framework that integrates conformal prediction with Monte Carlo Dropout. The method outputs prediction intervals for the calibration parameters with provable (1−α) coverage, ensuring statistical validity while remaining plug-and-play with arbitrary neural network architectures and multimodal visual sensors, including RGB and event cameras. Experiments on KITTI and DSEC demonstrate that the approach significantly improves the reliability and robustness of calibration estimates, enabling high-confidence multi-sensor fusion. To the best of the authors' knowledge, this is the first solution for online extrinsic calibration in dynamic environments that simultaneously provides rigorous statistical guarantees and practical engineering deployability.
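The paper's pipeline is not shown in code here, but the core idea (split conformal prediction wrapped around Monte Carlo Dropout regression) can be sketched in a few lines. The following is a minimal sketch, not the authors' implementation; `model`, `calib_x`, `calib_y`, and `test_x` are hypothetical placeholders for a dropout-equipped calibration network and a held-out calibration split:

```python
import numpy as np
import torch

def mc_dropout_predict(model, x, n_samples=32):
    """Mean and std over stochastic forward passes with dropout kept active."""
    model.eval()
    for m in model.modules():          # re-enable only the dropout layers
        if isinstance(m, torch.nn.Dropout):
            m.train()
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

def conformal_quantile(scores, alpha=0.1):
    """Finite-sample-corrected quantile of nonconformity scores,
    yielding intervals with >= (1 - alpha) marginal coverage."""
    n = scores.shape[0]
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q, axis=0, method="higher")

# Calibration step on a held-out split (calib_x, calib_y are hypothetical).
mu, _ = mc_dropout_predict(model, calib_x)
scores = (mu - calib_y).abs().cpu().numpy()   # |residual| per extrinsic parameter
qhat = conformal_quantile(scores, alpha=0.1)  # one interval half-width per parameter

# Test step: [mu - qhat, mu + qhat] covers each true parameter
# with probability >= 0.9, marginally over calibration/test draws.
mu_test, _ = mc_dropout_predict(model, test_x)
mu_test = mu_test.cpu().numpy()
lower, upper = mu_test - qhat, mu_test + qhat
```

A common refinement, which this sketch omits, is to normalize the residuals by the MC-Dropout standard deviation so that interval widths adapt to each sample's predictive uncertainty.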
📝 Abstract
Accurate sensor calibration is crucial for autonomous systems, yet its uncertainty quantification remains underexplored. We present the first approach to integrate uncertainty awareness into online extrinsic calibration, combining Monte Carlo Dropout with Conformal Prediction to generate prediction intervals with a guaranteed level of coverage. We propose a framework that augments existing calibration models with uncertainty quantification and is compatible with various network architectures. Validated on the KITTI (RGB camera-LiDAR) and DSEC (event camera-LiDAR) datasets, we demonstrate effectiveness across different visual sensor types, measuring performance with metrics adapted to evaluate the efficiency and reliability of the predicted intervals. By providing calibration parameters with quantifiable confidence measures, we offer insight into the reliability of calibration estimates, which can greatly improve the robustness of sensor fusion in dynamic environments and serve the Computer Vision community.
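The abstract mentions metrics adapted to evaluate interval efficiency and reliability without defining them here. Standard proxies in the conformal-prediction literature are empirical coverage (reliability) and mean interval width (efficiency); the sketch below computes both and is an assumption about the evaluation, not the paper's exact metric definitions:

```python
import numpy as np

def interval_metrics(lower, upper, y_true):
    """Empirical coverage (reliability) and mean width (efficiency)
    of per-parameter prediction intervals."""
    inside = (y_true >= lower) & (y_true <= upper)
    coverage = inside.mean(axis=0)             # should be >= 1 - alpha
    mean_width = (upper - lower).mean(axis=0)  # narrower is better at fixed coverage
    return coverage, mean_width

# Usage with the intervals from the sketch above (test_y is hypothetical):
# coverage, mean_width = interval_metrics(lower, upper, test_y)
```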