HiMo: High-Speed Objects Motion Compensation in Point Clouds

📅 2025-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address point cloud motion distortion caused by moving traffic participants in high-speed scenarios with multi-LiDAR configurations, this paper proposes HiMo—the first motion compensation framework specifically designed for dynamic objects. Methodologically, HiMo extends a self-supervised scene flow network to jointly estimate scene flow and perform point-wise motion compensation, while incorporating a multi-LiDAR spatiotemporal synchronization and calibration strategy to ensure cross-sensor consistency. Contributions include: (1) the first formulation and compensation of point cloud distortions induced by other moving vehicles; (2) two novel evaluation metrics—point-level compensation accuracy and object shape similarity; and (3) a real-world highway dataset featuring heavy-duty vehicles captured with synchronized multi-LiDAR sensors. Experiments on Argoverse 2 and our proprietary dataset demonstrate that HiMo significantly improves dynamic object point cloud completeness and geometric fidelity, achieving state-of-the-art compensation accuracy.
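The core idea of point-wise motion compensation can be sketched as follows: each point in a LiDAR sweep is captured at a slightly different time, so a point on a moving object is shifted by that object's motion over the capture interval; given a per-point velocity from scene flow, each point can be moved to where it would have been at a common reference time. This is only a minimal illustration of that idea, not HiMo's actual pipeline (which involves a learned scene flow network and multi-LiDAR calibration); the function name and interface are my own.

```python
import numpy as np

def compensate_points(points, timestamps, flow, t_ref):
    """Undistort a sweep by shifting each point to a common reference time.

    points:     (N, 3) xyz positions captured during the sweep
    timestamps: (N,)   per-point capture times in seconds
    flow:       (N, 3) estimated per-point velocity in m/s (e.g. from scene flow)
    t_ref:      scalar reference time to compensate to
    """
    dt = (t_ref - timestamps)[:, None]   # (N, 1) time offset per point
    return points + flow * dt            # linear motion model over the offset
```

For example, a point on a vehicle moving at 10 m/s along x, captured 0.05 s before the reference time, is shifted forward by 0.5 m; in high-speed highway scenarios these per-sweep shifts become large enough to visibly deform object geometry, which is the distortion the paper targets.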

📝 Abstract
LiDAR point clouds often contain motion-induced distortions, degrading the accuracy of object appearances in the captured data. In this paper, we first characterize the underlying reasons for the point cloud distortion and show that this is present in public datasets. We find that this distortion is more pronounced in high-speed environments such as highways, as well as in multi-LiDAR configurations, a common setup for heavy vehicles. Previous work has dealt with point cloud distortion from the ego-motion but fails to consider distortion from the motion of other objects. We therefore introduce a novel undistortion pipeline, HiMo, that leverages scene flow estimation for object motion compensation, correcting the depiction of dynamic objects. We further propose an extension of a state-of-the-art self-supervised scene flow method. Due to the lack of well-established motion distortion metrics in the literature, we also propose two metrics for compensation performance evaluation: compensation accuracy at a point level and shape similarity on objects. To demonstrate the efficacy of our method, we conduct extensive experiments on the Argoverse 2 dataset and a new real-world dataset. Our new dataset is collected from heavy vehicles equipped with multi-LiDARs and on highways as opposed to mostly urban settings in the existing datasets. The source code, including all methods and the evaluation data, will be provided upon publication. See https://kin-zhang.github.io/HiMo for more details.
Problem

Research questions and friction points this paper is trying to address.

Addresses motion-induced distortions in LiDAR point clouds
Compensates for object motion in high-speed environments
Proposes new metrics for evaluating motion distortion compensation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scene flow estimation for motion compensation
Extended self-supervised scene flow method
New metrics for motion distortion evaluation
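The paper's exact metric definitions are not reproduced on this page. As an illustration of what an object-level shape-similarity measure can look like, below is a sketch of the symmetric Chamfer distance, a standard proxy for comparing a compensated object's point set against a reference shape; this is an assumption for illustration, not necessarily the metric HiMo proposes.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3).

    For each point in one set, find the nearest point in the other set;
    average both directions. Lower means more similar shapes.
    """
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

A well-compensated dynamic object should score close to zero against its reference shape, while an uncompensated, motion-smeared object scores higher.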