🤖 AI Summary
This work addresses the low localization accuracy and long-term drift of aerial drones in GPS-denied environments. It proposes a cooperative localization method between an aerial drone and a ground robot that incorporates hover constraints into an incremental factor graph optimization. A key contribution is leveraging autonomous hover commands from the autopilot as dynamic model information: while the drone hovers, zero-velocity constraints are imposed on the velocity states—analogous to Zero-Velocity Updates (ZUPT) in pedestrian navigation—to correct the motion state of the aerial platform. The system fuses LiDAR, inertial navigation, peer-to-peer ranging, altimeter, and stereo-vision data, and is evaluated on experimental data collected in a motion capture facility. Results show that hover constraints significantly suppress trajectory drift, reducing position error by approximately 37% and improving localization consistency and long-term stability in GNSS-denied scenarios.
📝 Abstract
In this work, we evaluate the use of aerial drone hover constraints in a multi-sensor fusion of ground robot and drone data to improve the localization performance of the drone. In particular, we build upon our prior work on cooperative localization between an aerial drone and a ground robot, which fuses data from LiDAR, inertial navigation, peer-to-peer ranging, an altimeter, and stereo vision, and evaluate the incorporation of knowledge from the autopilot regarding when the drone is hovering. This control command data is leveraged to add constraints on the velocity state. Hover constraints can be considered important dynamic model information, analogous to the exploitation of zero-velocity updates in pedestrian navigation. We analyze the benefits of these constraints using an incremental factor graph optimization. Experimental data collected in a motion capture facility is used to provide performance insights and assess the benefits of hover constraints.
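The core idea—treating autopilot hover reports as zero-velocity factors on the velocity states—can be illustrated with a toy example. The sketch below is not the paper's system (which fuses LiDAR, inertial, ranging, altimeter, and stereo data in an incremental factor graph); it is a minimal 1-D weighted least-squares estimator with invented noise parameters, showing how tightly weighted zero-velocity factors during hover suppress the position drift that would otherwise accumulate from a biased velocity sensor.

```python
# Toy 1-D illustration of hover (zero-velocity) constraints in a
# least-squares estimator. All sensor models and numbers here are
# invented for illustration only.
import numpy as np

def estimate_velocities(imu_vel, hover_idx, sigma_imu=0.5, sigma_hover=0.01):
    """Estimate velocity states from noisy IMU-derived velocities plus
    zero-velocity factors at hover time steps (weighted least squares)."""
    n = len(imu_vel)
    rows, rhs = [], []
    # IMU factors: v_k should match the (biased, noisy) measured velocity.
    for k in range(n):
        r = np.zeros(n); r[k] = 1.0 / sigma_imu
        rows.append(r); rhs.append(imu_vel[k] / sigma_imu)
    # Hover factors: v_k should be (nearly) zero while hovering.
    for k in hover_idx:
        r = np.zeros(n); r[k] = 1.0 / sigma_hover
        rows.append(r); rhs.append(0.0)
    A, b = np.vstack(rows), np.array(rhs)
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v

rng = np.random.default_rng(0)
n = 50
true_vel = np.zeros(n)                                   # drone hovers the whole window
imu_vel = true_vel + 0.3 + 0.2 * rng.standard_normal(n)  # bias + noise
hover_idx = range(n)                                     # autopilot reports hover throughout

v_plain = estimate_velocities(imu_vel, hover_idx=[])
v_hover = estimate_velocities(imu_vel, hover_idx)
drift_plain = abs(np.cumsum(v_plain)[-1])  # integrated position drift (m)
drift_hover = abs(np.cumsum(v_hover)[-1])
print(f"drift without hover factors: {drift_plain:.3f} m")
print(f"drift with hover factors:    {drift_hover:.3f} m")
```

Because the hover factors are weighted far more heavily than the IMU factors (sigma 0.01 vs. 0.5), the solver pulls the velocity states toward zero during hover, so the sensor bias no longer integrates into position error. In the paper's system the same role is played by zero-velocity constraints inside the incremental factor graph alongside the other sensor factors.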