🤖 AI Summary
Problem: Extrinsic calibration of multi-LiDAR and multi-camera systems suffers from low accuracy and a strong dependence on high-quality initial pose priors.
Method: This paper proposes a cross-modal joint calibration method built around a customized ChArUco calibration target. A unified nonlinear optimization framework supports LiDAR–LiDAR, camera–camera, and LiDAR–camera extrinsic calibration within a single formulation, independent of sensor modality or pairing. Robust multi-source feature correspondences are established via the ChArUco board, enabling joint optimization of all extrinsic parameters.
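To make the joint formulation concrete, here is a minimal sketch of one camera–board reprojection term and one LiDAR–board point-to-plane term stacked into a single least-squares problem. It is an illustration under stated assumptions, not the paper's implementation: the variable names (`cam_obs`, `lidar_obs`, `K`, `dist`) are hypothetical, OpenCV and SciPy are assumed, and the paper's actual framework jointly optimizes all sensor pairs and may use different residuals and solvers.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def residuals(x, cam_obs, lidar_obs, K, dist):
    """Stacked residuals for one camera and one LiDAR, both posed w.r.t. the board.

    x         : [camera pose (rvec, tvec), LiDAR pose (rvec, tvec)], 12 values,
                each mapping board coordinates into the respective sensor frame.
    cam_obs   : (board_corners_3d [Nx3], detected_corners_2d [Nx2]) from ChArUco detection.
    lidar_obs : Mx3 LiDAR points segmented on the board (board plane is z = 0).
    """
    cam_p, lid_p = x[:6], x[6:]
    board_pts, img_pts = cam_obs

    # Camera term: reprojection error of the known ChArUco corner grid.
    proj, _ = cv2.projectPoints(board_pts, cam_p[:3], cam_p[3:6], K, dist)
    cam_res = (proj.reshape(-1, 2) - img_pts).ravel()

    # LiDAR term: signed point-to-plane distance of board returns to the
    # board plane z = 0, after transforming them into the board frame.
    R, _ = cv2.Rodrigues(lid_p[:3])
    pts_board = (lidar_obs - lid_p[3:6]) @ R   # rows of R^T (p - t)
    lidar_res = pts_board[:, 2]

    return np.concatenate([cam_res, lidar_res])

# x0 holds rough initial guesses for both board-relative poses (the method's point
# is that these need not be accurate); Levenberg-Marquardt refines them jointly.
# sol = least_squares(residuals, x0, args=(cam_obs, lidar_obs, K, dist))
```

The LiDAR–camera extrinsic then follows by composing the two board-relative poses; in a joint framework of this kind, every sensor and every board view contributes residuals to one optimization problem.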
Contribution/Results: The method significantly reduces sensitivity to initialization while improving calibration robustness and generality. Evaluated in a real-world warehouse environment, it achieves an average reprojection error < 0.35 pixels and a LiDAR point-cloud alignment error < 1.2 cm. The optimization converges rapidly and yields stable calibration results, marking the first end-to-end unified calibration pipeline for multi-LiDAR–multi-camera systems.
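For reference, metrics of this kind are commonly computed as sketched below: mean corner reprojection error in pixels and mean nearest-neighbour distance between point clouds aligned with the estimated extrinsic. This is a generic sketch, not the paper's evaluation code; `R_ab` and `t_ab` denote an assumed extrinsic mapping one LiDAR frame into another.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_reprojection_error(projected_px, detected_px):
    """Mean Euclidean distance in pixels between projected and detected corners (Nx2 each)."""
    return np.linalg.norm(projected_px - detected_px, axis=1).mean()

def cloud_alignment_error(cloud_a, cloud_b, R_ab, t_ab):
    """Mean nearest-neighbour distance (metres) after mapping cloud_a into
    cloud_b's frame with the estimated extrinsic (R_ab, t_ab)."""
    aligned = cloud_a @ R_ab.T + t_ab
    dists, _ = cKDTree(cloud_b).query(aligned)
    return dists.mean()
```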
📝 Abstract
Extrinsic calibration is a cornerstone of autonomous driving. Its accuracy is crucial to the perception pipeline, as errors can compromise vehicle safety. Modern sensor systems collect different types of data from the environment, which makes aligning that data across sensors difficult. To this end, we propose a target-based extrinsic calibration system tailored for a multi-LiDAR and multi-camera sensor suite. The system enables cross-calibration between LiDARs and cameras with limited prior knowledge, using a custom ChArUco board and a dedicated nonlinear optimization method. We test the system on real-world data gathered in a warehouse. The results demonstrate the effectiveness of the proposed method and highlight the feasibility of a single pipeline tailored for various types of sensors.
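As a rough illustration of the ChArUco correspondence step mentioned above, the sketch below detects ChArUco chessboard corners using OpenCV's legacy `cv2.aruco` API (OpenCV ≤ 4.6; newer releases expose an equivalent `CharucoDetector` class). The board dimensions are placeholders, not the paper's customized target.

```python
import cv2

# Placeholder board geometry: 7x5 squares, 12 cm squares, 9 cm embedded ArUco markers.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_100)
board = cv2.aruco.CharucoBoard_create(7, 5, 0.12, 0.09, dictionary)

def detect_charuco_corners(image_bgr):
    """Return (corners_2d, ids) of ChArUco chessboard corners, or (None, None)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    marker_corners, marker_ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if marker_ids is None:
        return None, None
    # Interpolate sub-pixel chessboard corners from the detected ArUco markers.
    n_found, corners, ids = cv2.aruco.interpolateCornersCharuco(
        marker_corners, marker_ids, gray, board)
    if not n_found:
        return None, None
    return corners, ids
```

These 2D corners, paired with the board's known 3D corner grid and the board plane observed in the LiDAR returns, supply the multi-source correspondences that the nonlinear optimization consumes.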