🤖 AI Summary
To address registration failures in cross-source point clouds caused by non-uniform density and distribution discrepancies, this paper proposes a density-robust point cloud registration method. The approach introduces: (1) a density-robust feature encoder that explicitly decouples geometric structure from sampling density; and (2) a two-stage "loose generation, strict selection" matching paradigm, integrating one-to-many initial matching, sparse correspondence optimization, dense geometric refinement, and a cross-source feature alignment strategy. On the Kinect-LiDAR cross-modal benchmark, the method improves feature matching recall by 63.5 percentage points and registration recall by 57.6 percentage points. On the 3DMatch benchmark, it achieves state-of-the-art performance while demonstrating strong robustness across diverse subsampling densities.
📝 Abstract
Cross-source point clouds exhibit density inconsistency and distribution differences, causing previous registration methods to fail. We propose a density-robust feature extraction and matching scheme to achieve robust and accurate cross-source registration. To address the density inconsistency between cross-source data, we introduce a density-robust encoder that extracts density-robust features. To tackle the difficulty of feature matching and the scarcity of correct correspondences, we adopt a loose-to-strict matching pipeline built on a "loose generation, strict selection" idea: a one-to-many strategy loosely generates initial correspondences, after which high-quality correspondences are strictly selected through sparse matching and dense matching to achieve robust registration. On the challenging Kinect-LiDAR scene of the cross-source 3DCSR dataset, our method improves feature matching recall by 63.5 percentage points (pp) and registration recall by 57.6 pp. It also achieves the best performance on 3DMatch while remaining robust under diverse downsampling densities.
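To make the "loose generation, strict selection" idea concrete, the following is a minimal NumPy sketch, not the paper's actual method: the loose stage proposes top-k target candidates per source feature (one-to-many), and the strict stage keeps only mutual nearest neighbors that pass a distance-ratio test. The `k` and `ratio` parameters and the mutual-NN criterion are illustrative assumptions standing in for the paper's sparse/dense selection stages.

```python
import numpy as np

def loose_to_strict_match(feat_src, feat_tgt, k=2, ratio=0.9):
    """Toy 'loose generation, strict selection' feature matching.

    Loose stage: each source descriptor proposes its k nearest target
    descriptors (one-to-many candidates). Strict stage: keep a candidate
    only if it is a mutual nearest neighbor and the best distance is
    clearly smaller than the second-best (ratio test).
    """
    # Pairwise Euclidean distances between source and target descriptors.
    d = np.linalg.norm(feat_src[:, None, :] - feat_tgt[None, :, :], axis=-1)

    # Loose generation: top-k target candidates per source point.
    cand = np.argsort(d, axis=1)[:, :k]

    # Strict selection: mutual nearest neighbor + distance-ratio test.
    tgt_nn = np.argmin(d, axis=0)  # best source index for each target
    matches = []
    for i, cands in enumerate(cand):
        j = cands[0]                          # source i's best target
        if tgt_nn[j] != i:                    # must be mutual
            continue
        if k > 1 and d[i, cands[0]] > ratio * d[i, cands[1]]:
            continue                          # ambiguous, reject
        matches.append((i, j))
    return matches
```

For example, matching a set of one-hot descriptors against a reversed copy of itself recovers the reversing permutation, since every correct pair is mutual and unambiguous.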