VK-Det: Visual Knowledge Guided Prototype Learning for Open-Vocabulary Aerial Object Detection

📅 2025-11-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing open-vocabulary aerial object detection methods rely on textual supervision, making them susceptible to semantic bias and limiting generalization to unseen categories. To address this, the paper proposes a vision-language-model-driven framework that requires no additional supervision: (1) visual-knowledge-guided prototype learning constructs a class-agnostic detector; (2) a prototype-aware pseudo-labeling strategy based on feature clustering explicitly models inter-class decision boundaries, mitigating text-induced bias; and (3) region-feature matching combined with visual-knowledge distillation aligns the detector with multi-granularity semantic spaces. Evaluated on DIOR and DOTA, the method achieves 30.1 and 23.3 mAPᴺ (novel-category mAP), respectively, outperforming even methods that use extra supervision. This work establishes a zero-shot, strongly generalizable paradigm for open-vocabulary aerial object detection.

📝 Abstract
To identify objects beyond predefined categories, open-vocabulary aerial object detection (OVAD) leverages the zero-shot capabilities of vision-language models (VLMs) to generalize from base to novel categories. Existing approaches typically employ self-learning mechanisms with weak text supervision to generate region-level pseudo-labels that align detectors with VLM semantic spaces. However, text dependence induces semantic bias, restricting open-vocabulary expansion to text-specified concepts. We propose VK-Det, a Visual Knowledge-guided open-vocabulary object Detection framework without extra supervision. First, we discover and leverage the vision encoder's inherent informative-region perception to attain fine-grained localization and adaptive distillation. Second, we introduce a novel prototype-aware pseudo-labeling strategy: it models inter-class decision boundaries through feature clustering and maps detection regions to latent categories via prototype matching, enhancing attention to novel objects while compensating for missing supervision. Extensive experiments show state-of-the-art performance, achieving 30.1 mAPᴺ on DIOR and 23.3 mAPᴺ on DOTA, outperforming even extra-supervised methods.
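The alignment-by-distillation idea in the abstract can be sketched minimally: penalize the cosine distance between a detector's region embeddings and the frozen VLM's embeddings for the same regions. The loss form, feature shapes, and function names below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Unit-normalize feature vectors along the last axis."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def distillation_loss(det_feats, vlm_feats):
    """Mean cosine distance (1 - cosine similarity) between detector
    region embeddings and frozen VLM embeddings for the same regions.
    Minimizing this pulls the detector into the VLM semantic space."""
    d = l2_normalize(det_feats)
    v = l2_normalize(vlm_feats)
    return float(np.mean(1.0 - np.sum(d * v, axis=-1)))

# Toy check: identical features give (near-)zero loss.
rng = np.random.default_rng(0)
vlm = rng.normal(size=(4, 8))
zero_loss = distillation_loss(vlm, vlm)
```

In practice this term would be weighted against the detection losses; the paper's adaptive distillation additionally reweights regions by the vision encoder's informativeness, which is omitted here.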
Problem

Research questions and friction points this paper is trying to address.

Overcoming text-induced semantic bias in open-vocabulary aerial object detection
Enhancing novel object recognition without additional supervision requirements
Aligning detector semantic spaces with visual knowledge instead of text dependence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual knowledge guides prototype learning without supervision
Feature clustering models inter-class decision boundaries
Prototype matching maps detection regions to latent categories
Jianhang Yao
National University of Defense Technology
Yongbin Zheng
National University of Defense Technology
Siqi Lu
College of William and Mary
computer vision, machine learning, medical imaging
Wanying Xu
National University of Defense Technology
Peng Sun
National University of Defense Technology