🤖 AI Summary
Open-vocabulary object detection (OVD) suffers from domain shift and poor generalization on remote sensing (RS) imagery due to reliance on natural-image pretraining. To address this, we introduce Locate Anything on Earth (LAE), the first open-vocabulary detection task specifically designed for RS. We construct LAE-1M—the first million-scale RS detection dataset—and propose LAE-DINO, a DINO-based detector enhanced with two novel modules: Dynamic Vocabulary Construction (DVC) and Visual-Guided Text Prompt Learning (VisGT), which explicitly bridge semantic and distributional gaps between natural and RS imagery. Extensive experiments demonstrate that LAE-DINO achieves state-of-the-art performance on established benchmarks (DIOR, DOTA-v2.0) and a newly introduced fine-grained benchmark, LAE-80C. Our results validate the effectiveness of LAE's task formulation, LAE-1M's scale and diversity, and the architectural innovations in LAE-DINO, establishing a new foundation for open-vocabulary detection in remote sensing.
📝 Abstract
Object detection, particularly open-vocabulary object detection, plays a crucial role in Earth sciences, such as environmental monitoring, natural disaster assessment, and land-use planning. However, existing open-vocabulary detectors, primarily trained on natural-world images, struggle to generalize to remote sensing images due to a significant data domain gap. Thus, this paper aims to advance the development of open-vocabulary object detection in the remote sensing community. To achieve this, we first reformulate the task as Locate Anything on Earth (LAE), with the goal of detecting any novel concept on Earth. We then develop the LAE-Label Engine, which collects, auto-annotates, and unifies up to 10 remote sensing datasets, creating LAE-1M, the first large-scale remote sensing object detection dataset with broad category coverage. Using LAE-1M, we further propose and train the novel LAE-DINO model, the first open-vocabulary foundation object detector for the LAE task, featuring Dynamic Vocabulary Construction (DVC) and Visual-Guided Text Prompt Learning (VisGT) modules. DVC dynamically constructs a vocabulary for each training batch, while VisGT maps visual features to the semantic space, enhancing text features. We conduct comprehensive experiments on the established remote sensing benchmarks DIOR and DOTA-v2.0, as well as our newly introduced 80-class LAE-80C benchmark. Results demonstrate the advantages of the LAE-1M dataset and the effectiveness of the LAE-DINO method.
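To make the DVC idea concrete, here is a minimal sketch of per-batch dynamic vocabulary construction. This is an illustrative assumption about how such a module might work (the function name `build_batch_vocabulary` and the sampling scheme are hypothetical, not the paper's exact algorithm): categories present in the batch's ground truth serve as positives, and the vocabulary is padded to a fixed size with randomly sampled negative categories from the full label space.

```python
import random

def build_batch_vocabulary(batch_labels, full_vocabulary, vocab_size=80, seed=0):
    """Sketch of per-batch dynamic vocabulary construction (assumed scheme,
    not the paper's exact algorithm): keep every category appearing in the
    batch as a positive, then pad with randomly sampled negative categories
    up to a fixed vocabulary size."""
    rng = random.Random(seed)
    # Positive categories: every label appearing in this batch's ground truth.
    positives = sorted({c for labels in batch_labels for c in labels})
    # Negative candidates: all remaining categories in the full vocabulary.
    negatives = [c for c in full_vocabulary if c not in positives]
    n_neg = max(0, vocab_size - len(positives))
    sampled = rng.sample(negatives, min(n_neg, len(negatives)))
    return positives + sampled

# Toy example: a batch of two images with their ground-truth categories.
batch = [["airplane", "harbor"], ["ship"]]
full_vocab = ["airplane", "harbor", "ship", "bridge",
              "vehicle", "stadium", "windmill"]
vocab = build_batch_vocabulary(batch, full_vocab, vocab_size=5)
```

Keeping the per-batch vocabulary small and batch-specific avoids encoding the entire (potentially very large) category list with the text encoder at every step, while still exposing the detector to hard negative categories during training.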