AuxDet: Auxiliary Metadata Matters for Omni-Domain Infrared Small Target Detection

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Infrared small target detection (IRSTD) across diverse domains faces critical challenges including poor generalization across imaging systems, spectral bands, and resolutions; strong background clutter; and scarce target features. Existing vision-only approaches neglect the pivotal influence of imaging metadata—such as spectral band, sensor platform, resolution, and viewpoint—on detection performance. To address this, we propose the first metadata-driven, scene-aware dynamic multimodal fusion paradigm. Our method introduces an MLP-based high-dimensional metadata–visual feature coupling module and a lightweight prior-guided 1D convolutional enhancement structure. Evaluated on the WideIRSTD-Full benchmark, our approach comprehensively surpasses state-of-the-art methods, achieving significant improvements in detection robustness and accuracy under complex domain shifts. The framework offers interpretability and scalability, establishing a novel, principled modeling foundation for universal IRSTD.

📝 Abstract
Omni-domain infrared small target detection (IRSTD) poses formidable challenges, as a single model must seamlessly adapt to diverse imaging systems, varying resolutions, and multiple spectral bands simultaneously. Current approaches predominantly rely on visual-only modeling paradigms that not only struggle with complex background interference and inherently scarce target features, but also exhibit limited generalization capabilities across complex omni-scene environments where significant domain shifts and appearance variations occur. In this work, we reveal a critical oversight in existing paradigms: the neglect of readily available auxiliary metadata describing imaging parameters and acquisition conditions, such as spectral bands, sensor platforms, resolution, and observation perspectives. To address this limitation, we propose the Auxiliary Metadata Driven Infrared Small Target Detector (AuxDet), a novel multi-modal framework that fundamentally reimagines the IRSTD paradigm by incorporating textual metadata for scene-aware optimization. Through a high-dimensional fusion module based on multi-layer perceptrons (MLPs), AuxDet dynamically integrates metadata semantics with visual features, guiding adaptive representation learning for each individual sample. Additionally, we design a lightweight prior-initialized enhancement module using 1D convolutional blocks to further refine fused features and recover fine-grained target cues. Extensive experiments on the challenging WideIRSTD-Full benchmark demonstrate that AuxDet consistently outperforms state-of-the-art methods, validating the critical role of auxiliary information in improving robustness and accuracy in omni-domain IRSTD tasks. Code is available at https://github.com/GrokCV/AuxDet.
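The abstract describes an MLP-based fusion module that conditions visual features on encoded metadata (spectral band, sensor platform, resolution, viewpoint). The paper does not publish this code here, but the idea can be sketched as a FiLM-style conditioning step: a small MLP maps the metadata vector to per-channel scale and shift parameters applied to the backbone features. All sizes, weights, and the scale/shift formulation below are illustrative assumptions, not AuxDet's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def metadata_mlp(meta, W1, b1, W2, b2):
    """Map a metadata code (e.g. encoded band/platform/resolution) to
    per-channel scale and shift parameters for the visual features.
    (Hypothetical two-layer MLP; layer sizes are illustrative.)"""
    h = relu(meta @ W1 + b1)
    out = h @ W2 + b2
    scale, shift = np.split(out, 2)
    # Center the scale at 1 so an untrained module starts near identity.
    return 1.0 + scale, shift

# Toy sizes: 8-dim metadata code, 16-channel feature map of 32x32.
meta_dim, channels, h, w = 8, 16, 32, 32
W1 = rng.normal(0, 0.1, (meta_dim, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 2 * channels)); b2 = np.zeros(2 * channels)

meta = np.zeros(meta_dim); meta[2] = 1.0   # e.g. one-hot spectral band
feats = rng.normal(size=(channels, h, w))  # stand-in for backbone features

scale, shift = metadata_mlp(meta, W1, b1, W2, b2)
# Per-sample modulation: each channel is rescaled and shifted by the
# metadata-derived parameters, broadcast over spatial positions.
fused = feats * scale[:, None, None] + shift[:, None, None]
print(fused.shape)  # (16, 32, 32)
```

The key design point the abstract emphasizes is that this modulation is computed per sample, so the same backbone adapts its representation to each image's acquisition conditions.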
Problem

Research questions and friction points this paper is trying to address.

Detecting infrared small targets across diverse domains and conditions
Overcoming limited generalization in visual-only IRSTD models
Integrating auxiliary metadata for adaptive target detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporates textual metadata for scene-aware optimization
Dynamically integrates metadata with visual features
Uses 1D convolutional blocks to refine features
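The "lightweight prior-initialized enhancement module using 1D convolutional blocks" can likewise be sketched in miniature. The version below assumes the 1D convolution runs along the channel axis with a residual connection, and initializes the kernel as a simple smoothing prior; these are illustrative assumptions about an unpublished module, not the paper's actual design.

```python
import numpy as np

def conv1d_channels(x, kernel):
    """1D convolution along the channel axis with 'same' padding,
    applied independently at every spatial location."""
    c, k = x.shape[0], kernel.size
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0), (0, 0)))
    out = np.zeros_like(x)
    for i in range(c):
        # Weighted sum over a window of neighboring channels.
        out[i] = sum(kernel[j] * xp[i + j] for j in range(k))
    return out

rng = np.random.default_rng(1)
fused = rng.normal(size=(16, 32, 32))   # stand-in for fused features
kernel = np.array([0.25, 0.5, 0.25])    # illustrative smoothing-prior init
refined = fused + conv1d_channels(fused, kernel)  # residual refinement
print(refined.shape)  # (16, 32, 32)
```

A 1D kernel over channels keeps the parameter count tiny compared with 2D spatial convolutions, which matches the "lightweight" framing, and the residual form lets the module recover fine-grained target cues without overwriting the fused features.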
Authors

Yangting Shi, Northwestern Polytechnical University (infrared small target detection)
Renjie He, School of Electronics and Information, Northwestern Polytechnical University
Le Hui, Northwestern Polytechnical University (point cloud)
Xiang Li, VCIP, School of Computer Science, Nankai University
Jian Yang, VCIP, School of Computer Science, Nankai University
Ming-Ming Cheng, Professor of Computer Science, Nankai University (computer vision, computer graphics, visual attention, saliency)
Yimian Dai, VCIP, School of Computer Science, Nankai University