🤖 AI Summary
Existing open-vocabulary aerial object detection methods rely on textual supervision, making them susceptible to semantic bias and limiting generalization to unseen categories. To address this, we propose VK-Det, a vision-knowledge-guided framework that requires no extra supervision: (1) vision-knowledge-guided prototype learning constructs a class-agnostic detector; (2) a vision-guided pseudo-labeling strategy coupled with prototype-aware clustering explicitly models inter-class decision boundaries, mitigating text-induced bias; and (3) region-feature matching combined with vision-knowledge distillation aligns the detector with multi-granularity semantic spaces. Evaluated on DIOR and DOTA, our method achieves 30.1 and 23.3 $\mathrm{mAP}^{N}$ on novel categories, respectively, surpassing even state-of-the-art methods that use extra supervision. This work establishes a strongly generalizable, zero-shot paradigm for open-vocabulary aerial object detection.
📝 Abstract
To identify objects beyond predefined categories, open-vocabulary aerial object detection (OVAD) leverages the zero-shot capabilities of vision-language models (VLMs) to generalize from base to novel categories. Existing approaches typically employ self-learning mechanisms with weak text supervision to generate region-level pseudo-labels that align detectors with the VLM semantic space. However, this dependence on text induces semantic bias, restricting open-vocabulary expansion to text-specified concepts. We propose $\textbf{VK-Det}$, a $\textbf{V}$isual $\textbf{K}$nowledge-guided open-vocabulary object $\textbf{Det}$ection framework $\textit{without}$ extra supervision. First, we discover and leverage the vision encoder's inherent perception of informative regions to attain fine-grained localization and adaptive distillation. Second, we introduce a novel prototype-aware pseudo-labeling strategy: it models inter-class decision boundaries through feature clustering and maps detection regions to latent categories via prototype matching. This enhances attention to novel objects while compensating for missing supervision. Extensive experiments show state-of-the-art performance, achieving 30.1 $\mathrm{mAP}^{N}$ on DIOR and 23.3 $\mathrm{mAP}^{N}$ on DOTA, outperforming even methods with extra supervision.
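The prototype-aware pseudo-labeling idea above can be illustrated with a minimal sketch: cluster region features from a frozen VLM vision encoder into latent-category prototypes, then assign each detection region to its best-matching prototype. This is an assumption-laden illustration, not the paper's actual implementation; the cluster count `k`, the threshold `tau`, and the helper names are hypothetical.

```python
# Minimal sketch of prototype-aware pseudo-labeling (illustrative only).
# Assumes region features come from a frozen VLM vision encoder; the
# cluster count k and similarity threshold tau are placeholder values,
# not the paper's hyperparameters.
import numpy as np

def build_prototypes(region_feats: np.ndarray, k: int,
                     iters: int = 20, seed: int = 0) -> np.ndarray:
    """Cluster L2-normalized region features with spherical k-means;
    the centroids act as latent-category prototypes that model
    inter-class decision boundaries."""
    rng = np.random.default_rng(seed)
    feats = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    # Initialize prototypes from random regions (fancy indexing copies).
    centers = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        # Assign each region to its nearest prototype by cosine similarity.
        assign = (feats @ centers.T).argmax(axis=1)
        for j in range(k):
            members = feats[assign == j]
            if len(members):
                c = members.mean(axis=0)
                centers[j] = c / np.linalg.norm(c)
    return centers

def pseudo_label(region_feats: np.ndarray, prototypes: np.ndarray,
                 tau: float = 0.5) -> np.ndarray:
    """Map detection regions to latent categories via prototype matching;
    regions whose best similarity falls below tau stay unlabeled (-1)."""
    feats = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    sims = feats @ prototypes.T
    labels = sims.argmax(axis=1)
    labels[sims.max(axis=1) < tau] = -1
    return labels
```

The pseudo-labels produced this way could then supervise the detector's classification head for regions that text-driven labeling would miss.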