LIR-LIVO: A Lightweight,Robust LiDAR/Vision/Inertial Odometry with Illumination-Resilient Deep Features

📅 2025-02-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of lightweight and robust localization under drastic illumination changes and sensor degradation, this paper proposes a tightly coupled LiDAR–visual–inertial odometry framework. Methodologically: (i) we introduce a novel illumination-invariant depth feature extraction mechanism to enhance feature stability in low-light and strong-glare conditions; (ii) we design a LiDAR-guided uniform depth-distribution feature matching strategy to mitigate visual feature sparsity; and (iii) we implement an adaptive joint matching scheme integrating SuperPoint and LightGlue, achieving a favorable accuracy–efficiency trade-off on resource-constrained platforms (e.g., Jetson AGX Orin). Evaluated on NTU-VIRAL, Hilti’22, and R3LIVE benchmarks, our method achieves state-of-the-art performance: on Hilti’22’s low-illumination sequences, pose estimation error is reduced by 32%, while maintaining real-time operation at 35 FPS.

Technology Category

Application Category

📝 Abstract
In this paper, we propose LIR-LIVO, a lightweight and robust LiDAR-inertial-visual odometry system designed for challenging illumination and degraded environments. The proposed method leverages deep learning-based illumination-resilient features and LiDAR-Inertial-Visual Odometry (LIVO). By incorporating advanced techniques such as uniform depth distribution of features enabled by depth association with LiDAR point clouds and adaptive feature matching utilizing Superpoint and LightGlue, LIR-LIVO achieves state-of-the-art (SOTA) accuracy and robustness with low computational cost. Experiments are conducted on benchmark datasets, including NTU-VIRAL, Hilti'22, and R3LIVE-Dataset. The corresponding results demonstrate that our proposed method outperforms other SOTA methods on both standard and challenging datasets. Particularly, the proposed method demonstrates robust pose estimation under poor ambient lighting conditions in the Hilti'22 dataset. The code of this work is publicly accessible on GitHub to facilitate advancements in the robotics community.
Problem

Research questions and friction points this paper is trying to address.

Lightweight LiDAR/vision/inertial odometry
Illumination-resilient deep features
Robust pose estimation in challenging environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes deep learning-based illumination-resilient features
Incorporates LiDAR-inertial-visual odometry for robustness
Employs adaptive feature matching with Superpoint and LightGlue
🔎 Similar Papers
No similar papers found.
S
Shujie Zhou
Z
Zihao Wang
X
Xinye Dai
Weiwei Song
Weiwei Song
Pengcheng Laboratory, https://github.com/weiweisong415
Deep learningremote sensing
S
Shengfeng Gu