Fusion Meets Diverse Conditions: A High-diversity Benchmark and Baseline for UAV-based Multimodal Object Detection with Condition Cues

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing RGB-IR UAV target detection datasets suffer from insufficient environmental diversity, failing to capture realistic, complex imaging conditions. Method: We introduce ATR-UMOD, a high-diversity multimodal dataset spanning multiple altitudes, viewpoints, weather conditions, and illumination levels. To leverage this diversity, we propose Prompt-Conditioned Dynamic Fusion (PCDF)—a novel conditional prompting framework that treats imaging conditions as learnable textual prompts to guide adaptive multimodal feature fusion. PCDF incorporates a decoupled condition-prompt module for generalization across scenarios with or without condition annotations and employs soft gating for task-aware dynamic fusion. Results: Extensive experiments on ATR-UMOD demonstrate that PCDF significantly outperforms state-of-the-art fusion methods, achieving superior robustness and detection accuracy under all-weather, multi-view, and multi-altitude conditions.
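The soft-gated fusion idea described above can be sketched in a few lines of plain Python: a condition embedding is scored into a gate in (0, 1), which then reweights the RGB and IR feature contributions. This is a minimal illustrative sketch, not the paper's implementation; the scalar gate, the linear condition scoring, and all function names here are assumptions for exposition.

```python
import math

def sigmoid(x):
    """Squash a real-valued score into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def soft_gate(condition_embedding, weights, bias):
    """Map a condition embedding (e.g. encoded illumination/weather cues)
    to a scalar fusion gate via a learned linear score."""
    score = sum(c * w for c, w in zip(condition_embedding, weights)) + bias
    return sigmoid(score)

def fuse(rgb_feat, ir_feat, gate):
    """Convex combination of modality features:
    gate -> 1 favours RGB, gate -> 0 favours IR."""
    return [gate * r + (1.0 - gate) * i for r, i in zip(rgb_feat, ir_feat)]

# Hypothetical conditions: [daylight, night] one-hot cues.
g_day = soft_gate([1.0, 0.0], weights=[4.0, -4.0], bias=0.0)    # near 1: rely on RGB
g_night = soft_gate([0.0, 1.0], weights=[4.0, -4.0], bias=0.0)  # near 0: rely on IR
fused = fuse([1.0, 1.0], [0.0, 0.0], g_day)
```

In the actual PCDF framework the gate would be produced per task and per channel from learned text-prompt encodings of the condition annotations, rather than from a single hand-set linear score as sketched here.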

📝 Abstract
Unmanned aerial vehicle (UAV)-based object detection with visible (RGB) and infrared (IR) images facilitates robust around-the-clock detection, driven by advancements in deep learning techniques and the availability of high-quality datasets. However, existing datasets struggle to fully capture real-world complexity due to limited imaging conditions. To this end, we introduce ATR-UMOD, a high-diversity dataset covering varied scenarios, spanning altitudes from 80m to 300m, angles from 0° to 75°, and all-day, all-year time variations in rich weather and illumination conditions. Moreover, each RGB-IR image pair is annotated with 6 condition attributes, offering valuable high-level contextual information. To meet the challenge posed by such diverse conditions, we propose a novel prompt-guided condition-aware dynamic fusion (PCDF) to adaptively reassign multimodal contributions by leveraging annotated condition cues. By encoding imaging conditions as text prompts, PCDF effectively models the relationship between conditions and multimodal contributions through a task-specific soft-gating transformation. A prompt-guided condition-decoupling module further ensures applicability in practice when condition annotations are unavailable. Experiments on the ATR-UMOD dataset demonstrate the effectiveness of PCDF.
Problem

Research questions and friction points this paper is trying to address.

Addressing limited imaging condition diversity in UAV object detection datasets
Developing adaptive multimodal fusion using annotated condition cues
Creating high-diversity benchmark for RGB-IR UAV object detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces high-diversity UAV dataset with varied conditions
Proposes prompt-guided dynamic fusion using condition cues
Encodes imaging conditions as text prompts for adaptation
Chen Chen
National University of Defense Technology, China
Kangcheng Bin
National University of Defense Technology, China
Ting Hu
Associate Professor, School of Computing, Queen's University, Canada
Explainable AI, Evolutionary Computing, Machine Learning, Bioinformatics
Jiahao Qi
National University of Defense Technology, China
Xingyue Liu
National University of Defense Technology, China
Tianpeng Liu
National University of Defense Technology, China
Zhen Liu
National University of Defense Technology, China
Yongxiang Liu
Professor, National University of Defense Technology
Remote Sensing, Synthetic Aperture Radar, Radar, Image Processing, Pattern Recognition
Ping Zhong
University of Houston