🤖 AI Summary
To address the challenges of Automatic Target Recognition (ATR) in extreme operational domains—such as military applications—where conventional detectors struggle with unknown categories and complex, open-world environments, this paper proposes a collaborative zero-shot detection pipeline integrating open-world detectors (e.g., OW-DETR) with large vision-language models (LVLMs; e.g., LLaVA, Qwen-VL). Methodologically, it introduces a framework combining multimodal prompt engineering with cross-distance and cross-modal performance analysis, jointly optimizing precise object localization and interpretable zero-shot classification confidence. This design overcomes the dual limitations of weak spatial grounding in LVLMs and poor generalization of traditional detectors. Evaluated on zero-shot military vehicle recognition, the system achieves significant gains in accuracy and robustness. Quantitative ablation studies further reveal critical dependencies of performance on observation distance, imaging modality, and prompt strategy, providing actionable insights for real-world ATR deployment.
📝 Abstract
Automatic target recognition (ATR) plays a critical role in tasks such as navigation and surveillance, where safety and accuracy are paramount. In extreme use cases, such as military applications, these factors are often challenged by unknown terrains, environmental conditions, and novel object categories. Current object detectors, including open-world detectors, lack the ability to confidently recognize novel objects or operate in unknown environments, as they have not been exposed to these new conditions. However, Large Vision-Language Models (LVLMs) exhibit emergent properties that enable them to recognize objects in varying conditions in a zero-shot manner. Despite this, LVLMs struggle to localize objects effectively within a scene. To address these limitations, we propose a novel pipeline that combines the detection capabilities of open-world detectors with the recognition confidence of LVLMs, creating a robust system for zero-shot ATR of novel classes and unknown domains. In this study, we compare the performance of various LVLMs for recognizing military vehicles, which are often underrepresented in training datasets. Additionally, we examine the impact of factors such as distance range, modality, and prompting methods on recognition performance, providing insights into the development of more reliable ATR systems for novel conditions and classes.
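The division of labor described in the abstract—an open-world detector proposing class-agnostic boxes, with an LVLM assigning zero-shot labels and confidences to each proposal—can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `detect` and `classify` callables stand in for an open-world detector (e.g., OW-DETR) and an LVLM query (e.g., LLaVA or Qwen-VL prompted on a cropped region), and the confidence threshold is a hypothetical parameter.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2)

@dataclass
class Detection:
    box: Box
    label: str
    confidence: float

def zero_shot_atr(
    image: object,
    detect: Callable[[object], List[Box]],              # open-world detector: class-agnostic proposals
    classify: Callable[[object, Box, str], Tuple[str, float]],  # LVLM: label + confidence per region
    prompt: str,
    min_conf: float = 0.5,
) -> List[Detection]:
    """Localize with an open-world detector, then label each
    proposal zero-shot with an LVLM; keep confident detections."""
    results = []
    for box in detect(image):
        label, conf = classify(image, box, prompt)
        if conf >= min_conf:
            results.append(Detection(box, label, conf))
    return results
```

In this arrangement the detector never needs to know the target vocabulary, and the LVLM never needs to localize—each component covers the other's weakness, which is the core idea of the proposed pipeline.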