🤖 AI Summary
This work addresses the challenge of validating multi-UAV collaborative indoor localization algorithms. To this end, we introduce the first multimodal benchmark dataset designed specifically for this task, featuring synchronized acquisition of ultra-wideband (UWB) ranging measurements (including raw channel impulse response (CIR) data), stereo and downward-facing visual streams, inertial measurement unit (IMU) readings, laser altimeter outputs, and ground-truth motion-capture trajectories. The dataset covers both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions, as well as high-dynamic maneuvers at speeds up to 4.418 m/s. Notably, it is the first benchmark to jointly incorporate high-fidelity UWB CIR signals and multi-view visual data for collaborative multi-UAV flight evaluation. We release an open-source, unified evaluation framework supporting diverse algorithms, including visual-inertial odometry (VIO), UWB-aided extended Kalman filter (EKF) localization, and CIR-based NLOS classification. The dataset comprises 36 experiments totaling 217 minutes of high-quality flight data, substantially lowering the barrier to developing and validating multi-sensor fusion localization systems.
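To make the CIR-based NLOS classification task concrete, the sketch below extracts two hand-crafted features commonly used to separate LOS from NLOS channels. The tap layout, feature choices, and synthetic CIRs are illustrative assumptions, not the dataset's actual CIR format or the evaluation framework's API.

```python
import numpy as np

def cir_features(cir):
    """Two hand-crafted features often used for LOS/NLOS classification.

    cir: 1-D array of CIR tap magnitudes. The tap layout and feature
    choices here are illustrative assumptions, not the MILUV format.
    """
    power = cir.astype(float) ** 2
    total = power.sum()
    # Fraction of energy in the strongest tap: high for a clean LOS path.
    peak_ratio = power.max() / total
    # RMS delay spread: NLOS multipath smears energy over more taps.
    taps = np.arange(len(cir))
    mean_delay = (taps * power).sum() / total
    rms_spread = np.sqrt(((taps - mean_delay) ** 2 * power).sum() / total)
    return peak_ratio, rms_spread

# Synthetic CIRs: a sharp LOS peak vs. a smeared NLOS decay tail.
los_cir = np.zeros(32)
los_cir[3], los_cir[4] = 1.0, 0.2
nlos_cir = 0.3 * np.exp(-0.1 * np.arange(32))
```

A classifier (e.g. a random forest or another standard machine-learning model) would then be trained on such features, or directly on the raw CIR taps, to predict the LOS/NLOS label.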
📝 Abstract
This paper introduces MILUV, a Multi-UAV Indoor Localization dataset with UWB and Vision measurements. The dataset comprises 217 minutes of flight time over 36 experiments using three quadcopters, collecting ultra-wideband (UWB) ranging data, including raw timestamps and channel impulse response (CIR) data; vision data from a stereo camera and a bottom-facing monocular camera; inertial measurement unit data; height measurements from a laser rangefinder; magnetometer data; and ground-truth poses from a motion-capture system. The UWB data is collected from up to 12 transceivers affixed to mobile robots and static tripods in both line-of-sight and non-line-of-sight conditions. The UAVs fly at a maximum speed of 4.418 m/s in an indoor environment with visual fiducial markers as features. MILUV is versatile and can be used for a wide range of applications beyond localization, but its primary purpose is the testing and validation of multi-robot UWB- and vision-based localization algorithms. The dataset can be downloaded at https://doi.org/10.25452/figshare.plus.28386041.v1. A development kit is presented alongside the MILUV dataset, which includes benchmarking algorithms such as visual-inertial odometry, UWB-based localization using an extended Kalman filter, and classification of CIR data using machine learning approaches. The development kit can be found at https://github.com/decargroup/miluv, and is supplemented with a website available at https://decargroup.github.io/miluv/.
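To illustrate the UWB-based EKF baseline, here is a minimal measurement update for a single range to a fixed anchor. The position-only state, noise value, and function name are illustrative assumptions for a sketch, not the development kit's actual filter or API.

```python
import numpy as np

def ekf_range_update(x, P, r_meas, anchor, R=0.01):
    """One EKF measurement update for a single UWB range measurement.

    x: (3,) position estimate; P: (3,3) covariance; r_meas: measured
    range to a fixed anchor at position `anchor` (3,). Assumes the
    estimate is not exactly at the anchor (nonzero predicted range).
    Names and the position-only state are illustrative, not the
    MILUV development kit's API.
    """
    d = x - anchor
    r_pred = np.linalg.norm(d)          # predicted range h(x)
    H = (d / r_pred).reshape(1, 3)      # Jacobian of h(x) at x
    S = H @ P @ H.T + R                 # innovation covariance (1x1)
    K = P @ H.T / S                     # Kalman gain, shape (3, 1)
    x_new = x + (K * (r_meas - r_pred)).ravel()
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new
```

Starting from a prior at the origin, an update with a 4 m range to an anchor at (5, 0, 0) pulls the estimate along the x-axis toward the range-consistent position and shrinks the covariance in that direction. In a full filter this update would be interleaved with an IMU-driven prediction step and repeated across the available transceivers.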