🤖 AI Summary
Infrared small target detection (IRSTD) across diverse domains faces critical challenges: poor generalization across imaging systems, spectral bands, and resolutions; strong background clutter; and scarce target features. Existing vision-only approaches neglect the pivotal influence of imaging metadata, such as spectral band, sensor platform, resolution, and viewpoint, on detection performance. To address this, we propose the first metadata-driven, scene-aware dynamic multimodal fusion paradigm for IRSTD. Our method introduces an MLP-based module that couples high-dimensional metadata embeddings with visual features, together with a lightweight, prior-initialized 1D convolutional enhancement structure. Evaluated on the WideIRSTD-Full benchmark, our approach comprehensively surpasses state-of-the-art methods, achieving significant improvements in detection robustness and accuracy under complex domain shifts. The framework offers interpretability and scalability, establishing a principled modeling foundation for universal IRSTD.
📝 Abstract
Omni-domain infrared small target detection (IRSTD) poses formidable challenges, as a single model must seamlessly adapt to diverse imaging systems, varying resolutions, and multiple spectral bands simultaneously. Current approaches predominantly rely on visual-only modeling paradigms that not only struggle with complex background interference and inherently scarce target features, but also exhibit limited generalization capabilities across complex omni-scene environments where significant domain shifts and appearance variations occur. In this work, we reveal a critical oversight in existing paradigms: the neglect of readily available auxiliary metadata describing imaging parameters and acquisition conditions, such as spectral bands, sensor platforms, resolution, and observation perspectives. To address this limitation, we propose the Auxiliary Metadata Driven Infrared Small Target Detector (AuxDet), a novel multi-modal framework that fundamentally reimagines the IRSTD paradigm by incorporating textual metadata for scene-aware optimization. Through a high-dimensional fusion module based on multi-layer perceptrons (MLPs), AuxDet dynamically integrates metadata semantics with visual features, guiding adaptive representation learning for each individual sample. Additionally, we design a lightweight prior-initialized enhancement module using 1D convolutional blocks to further refine fused features and recover fine-grained target cues. Extensive experiments on the challenging WideIRSTD-Full benchmark demonstrate that AuxDet consistently outperforms state-of-the-art methods, validating the critical role of auxiliary information in improving robustness and accuracy in omni-domain IRSTD tasks. Code is available at https://github.com/GrokCV/AuxDet.
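To make the described architecture concrete, below is a minimal NumPy sketch of the two components the abstract names: an MLP that maps imaging metadata to a per-channel modulation of the visual feature map, followed by a lightweight 1D convolution across channels to refine the fused features. This is an illustrative interpretation, not the paper's actual implementation: the FiLM-style scale/shift coupling, the metadata encoding, and all dimensions and names (`mlp`, `meta`, kernel values) are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    """Two-layer perceptron with ReLU, standing in for the fusion MLP."""
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

# Hypothetical metadata vector: encoded [spectral band, platform,
# resolution, viewpoint]. Real encodings would come from the dataset.
meta = np.array([1.0, 0.0, 0.5, 0.25])

# A toy visual feature map (channels, height, width) from a backbone.
C, H, W = 8, 16, 16
feat = rng.standard_normal((C, H, W))

# MLP projects metadata to per-channel scale and shift
# (FiLM-style conditioning; an assumption about the coupling form).
d_in, d_hid = meta.size, 16
w1 = rng.standard_normal((d_in, d_hid)) * 0.1
b1 = np.zeros(d_hid)
w2 = rng.standard_normal((d_hid, 2 * C)) * 0.1
b2 = np.zeros(2 * C)
params = mlp(meta, w1, b1, w2, b2)
scale, shift = 1.0 + params[:C], params[C:]

# Fuse: modulate each visual channel by the metadata-derived parameters.
fused = feat * scale[:, None, None] + shift[:, None, None]

# Lightweight refinement: a fixed 1D kernel convolved along the channel
# axis at every spatial location (the prior-initialized 1D conv block).
k = np.array([0.25, 0.5, 0.25])
flat = fused.reshape(C, -1)
refined = np.stack(
    [np.convolve(flat[:, i], k, mode="same") for i in range(flat.shape[1])],
    axis=1,
).reshape(C, H, W)
```

In a trained model the scale/shift parameters would be learned per sample, letting each image's acquisition conditions steer its own feature representation, which is the scene-aware adaptation the abstract describes.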