AI Summary
To address the sparsity of LiDAR point clouds and noise introduced by pseudo-point clouds in multimodal fusion, this paper proposes a LiDAR-centric two-stage refined 3D detection framework. In the first stage, high-precision 3D proposals are generated solely from raw LiDAR data, thereby avoiding noise from vision-based or depth-completion-based pseudo-point cloud generation. In the second stage, depth-completion-derived pseudo-point clouds are selectively incorporated for hard examples; a hierarchical pseudo-point residual encoding module is introduced to explicitly model feature and positional residuals, enhancing local structural representation. Furthermore, an instance-level dual-stage result fusion mechanism is designed to achieve complementary modality advantages. Evaluated on the KITTI benchmark, the method achieves consistent and significant performance gains across all object classes and difficulty levels, demonstrating superior detection accuracy and robustness.
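The instance-level dual-stage fusion described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: it assumes axis-aligned BEV boxes `[x1, y1, x2, y2, score]`, and assumes the merge keeps all stage-1 (LiDAR-only) detections while adding only those stage-2 (pseudo-point-augmented) detections that do not overlap an existing stage-1 box. The function names and the IoU threshold are hypothetical.

```python
import numpy as np

def bev_iou(a, b):
    """IoU of two axis-aligned BEV boxes [x1, y1, x2, y2, ...]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def fuse_instances(stage1, stage2, iou_thr=0.5):
    """Keep all stage-1 boxes; add stage-2 boxes only where they
    recover instances the LiDAR-only stage missed (no overlap)."""
    fused = [list(b) for b in stage1]
    for b2 in stage2:
        if all(bev_iou(b2, b1) < iou_thr for b1 in stage1):
            fused.append(list(b2))  # hard example found only by stage 2
    return fused
```

Under this assumed policy, the more reliable LiDAR-only predictions always survive, and pseudo-point predictions contribute only complementary instances.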
Abstract
Existing LiDAR-Camera fusion methods have achieved strong results in 3D object detection. To address the sparsity of point clouds, previous approaches typically construct spatial pseudo point clouds via depth completion as auxiliary input and adopt a proposal-refinement framework to generate detection results. However, introducing pseudo points inevitably brings noise, potentially resulting in inaccurate predictions. Considering the differing roles and reliability levels of each modality, we propose LDRFusion, a novel LiDAR-dominant two-stage refinement framework for multi-sensor fusion. The first stage solely relies on LiDAR to produce accurately localized proposals, followed by a second stage where pseudo point clouds are incorporated to detect challenging instances. The instance-level results from both stages are subsequently merged. To further enhance the representation of local structures in pseudo point clouds, we present a hierarchical pseudo point residual encoding module, which encodes neighborhood sets using both feature and positional residuals. Experiments on the KITTI dataset demonstrate that our framework consistently achieves strong performance across multiple categories and difficulty levels.
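The residual encoding idea, encoding a neighborhood with both feature and positional residuals relative to its center point, can be sketched with NumPy. This is a hedged sketch of one neighborhood at one hierarchy level, not the paper's module: the max-pooling aggregation and the plain concatenation of the two residual types are assumptions, and in the actual network these steps would be interleaved with learned layers.

```python
import numpy as np

def residual_encode(center_xyz, center_feat, nbr_xyz, nbr_feat):
    """Encode one neighborhood of a pseudo point cloud.

    center_xyz: (3,)  coordinates of the center point
    center_feat: (C,) features of the center point
    nbr_xyz:    (K, 3) coordinates of K neighbors
    nbr_feat:   (K, C) features of K neighbors
    Returns a (3 + C,) vector summarizing the local structure.
    """
    pos_res = nbr_xyz - center_xyz    # (K, 3) positional residuals
    feat_res = nbr_feat - center_feat  # (K, C) feature residuals
    enc = np.concatenate([pos_res, feat_res], axis=1)  # (K, 3 + C)
    # Assumed aggregation: channel-wise max over the neighborhood.
    return enc.max(axis=0)
```

Explicitly subtracting the center's position and features makes the encoding translation-invariant in both spaces, which is one plausible reason residuals help represent local structure in noisy pseudo points.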