🤖 AI Summary
Accurate relative localization of unmanned aerial vehicles (UAVs) with respect to unmanned ground vehicles (UGVs) remains challenging in GPS-denied environments.
Method: This paper proposes a real-time 3D detection and 6-DoF relative pose estimation method leveraging a UGV-mounted LiDAR and the PointPillars deep learning architecture—specifically, pillar-based voxelization followed by 2D CNN processing. To our knowledge, this is the first application of PointPillars to UAV relative localization, replacing conventional pipelines involving point cloud segmentation, Euclidean clustering, and heuristic rules. The approach performs end-to-end point cloud processing for 3D UAV detection and integrates geometric constraints to solve for full six-degree-of-freedom relative pose.
Contribution/Results: Evaluated in real-world GPS-denied scenarios, the method achieves a 37.2% improvement in localization accuracy over baseline approaches, with markedly enhanced robustness and stability. Validation against ground truth confirms its effectiveness. This work establishes a scalable, lightweight deep learning paradigm for multi-agent collaborative perception and localization.
📝 Abstract
This paper explores the use of applying a deep learning approach for 3D object detection to compute the relative position of an Unmanned Aerial Vehicle (UAV) from an Unmanned Ground Vehicle (UGV) equipped with a LiDAR sensor in a GPS Denied environment. This was achieved by evaluating the LiDAR sensor's data through a 3D detection algorithm (PointPillars). The PointPillars algorithm incorporates a column voxel point-cloud representation and a 2D Convolutional Neural Network (CNN) to generate distinctive point-cloud features representing the object to be identified, in this case, the UAV. The current localization method utilizes point-cloud segmentation, Euclidean clustering, and predefined heuristics to obtain the relative position of the UAV. Results from the two methods were then compared to a reference truth solution.