🤖 AI Summary
To address the challenge of lightweight and robust localization under drastic illumination changes and sensor degradation, this paper proposes a tightly coupled LiDAR–visual–inertial odometry framework. Methodologically: (i) we introduce a novel illumination-invariant depth feature extraction mechanism to enhance feature stability in low-light and strong-glare conditions; (ii) we design a LiDAR-guided uniform depth-distribution feature matching strategy to mitigate visual feature sparsity; and (iii) we implement an adaptive joint matching scheme integrating SuperPoint and LightGlue, achieving a favorable accuracy–efficiency trade-off on resource-constrained platforms (e.g., Jetson AGX Orin). Evaluated on NTU-VIRAL, Hilti’22, and R3LIVE benchmarks, our method achieves state-of-the-art performance: on Hilti’22’s low-illumination sequences, pose estimation error is reduced by 32%, while maintaining real-time operation at 35 FPS.
📝 Abstract
In this paper, we propose LIR-LIVO, a lightweight and robust LiDAR-inertial-visual odometry system designed for challenging illumination and degraded environments. The proposed method leverages deep learning-based illumination-resilient features and LiDAR-Inertial-Visual Odometry (LIVO). By incorporating advanced techniques such as uniform depth distribution of features enabled by depth association with LiDAR point clouds and adaptive feature matching utilizing Superpoint and LightGlue, LIR-LIVO achieves state-of-the-art (SOTA) accuracy and robustness with low computational cost. Experiments are conducted on benchmark datasets, including NTU-VIRAL, Hilti'22, and R3LIVE-Dataset. The corresponding results demonstrate that our proposed method outperforms other SOTA methods on both standard and challenging datasets. Particularly, the proposed method demonstrates robust pose estimation under poor ambient lighting conditions in the Hilti'22 dataset. The code of this work is publicly accessible on GitHub to facilitate advancements in the robotics community.