🤖 AI Summary
To address the scarcity of annotated data limiting deep model performance in multispectral object detection, this work pioneers the integration of vision-language models (VLMs) into this domain, introducing a cross-modal semantic alignment mechanism that effectively transfers textual priors to heterogeneous thermal-infrared and visible-spectrum modalities. Building upon Grounding DINO and YOLO-World architectures, we design a text-guided multimodal feature fusion module enabling joint detection under few-shot conditions. On the FLIR and M3FD benchmarks, our method significantly outperforms dedicated multispectral detectors in few-shot settings and achieves state-of-the-art or comparable performance under full supervision. This study demonstrates that semantic priors—encoded via VLMs—mitigate both spectral domain shift and annotation scarcity, offering a novel paradigm for low-resource multispectral perception.
📝 Abstract
Multispectral object detection is critical for safety-sensitive applications such as autonomous driving and surveillance, where robust perception under diverse illumination conditions is essential. However, the limited availability of annotated multispectral data severely restricts the training of deep detectors. In such data-scarce scenarios, textual class information can serve as a valuable source of semantic supervision. Motivated by the recent success of Vision-Language Models (VLMs) in computer vision, we explore their potential for few-shot multispectral object detection. Specifically, we adapt two representative VLM-based detectors, Grounding DINO and YOLO-World, to handle multispectral inputs and propose an effective mechanism to integrate the textual, visible, and thermal modalities. Through extensive experiments on two popular multispectral image benchmarks, FLIR and M3FD, we demonstrate that VLM-based detectors not only excel in few-shot regimes, significantly outperforming specialized multispectral models trained with comparable data, but also achieve competitive or superior results under fully supervised settings. Our findings reveal that the semantic priors learned by large-scale VLMs effectively transfer to unseen spectral modalities, offering a powerful pathway toward data-efficient multispectral perception.