AI Summary
Existing referring multi-object tracking (RMOT) research focuses predominantly on ground-level scenes, failing to address the semantic understanding and long-range tracking requirements inherent to the wide-area aerial perspectives of unmanned aerial vehicles (UAVs). This work bridges that gap by introducing AerialMind, the first large-scale RMOT benchmark specifically designed for UAV scenarios. We propose COALA, a semi-automatic annotation framework that substantially reduces the cost of aligning multiple objects with natural language expressions. Furthermore, we design HawkEyeTrack, a novel method leveraging vision-language collaborative representation learning, cross-modal feature alignment, and spatiotemporal context modeling to enhance instruction-driven detection and tracking. Experiments demonstrate that AerialMind poses significant challenges and that HawkEyeTrack achieves substantial improvements over state-of-the-art baselines on natural language-guided multi-object tracking. Collectively, this work establishes a critical data foundation and technical framework for embodied intelligent UAV systems.
Abstract
Referring Multi-Object Tracking (RMOT) aims to achieve precise object detection and tracking through natural language instructions, representing a fundamental capability for intelligent robotic systems. However, current RMOT research remains largely confined to ground-level scenarios, which constrains its ability to capture broad-scale scene contexts and perform comprehensive tracking and path planning. In contrast, Unmanned Aerial Vehicles (UAVs) leverage their expansive aerial perspectives and superior maneuverability to enable wide-area surveillance. Moreover, UAVs have emerged as critical platforms for Embodied Intelligence, giving rise to an unprecedented demand for intelligent aerial systems capable of natural language interaction. To this end, we introduce AerialMind, the first large-scale RMOT benchmark in UAV scenarios, which aims to bridge this research gap. To facilitate its construction, we develop an innovative semi-automated COllaborative Agent-based Labeling Assistant (COALA) framework that significantly reduces labor costs while maintaining annotation quality. Furthermore, we propose HawkEyeTrack (HETrack), a novel method that collaboratively enhances vision-language representation learning and improves the perception of UAV scenarios. Comprehensive experiments validate the challenging nature of our dataset and the effectiveness of our method.