WaveMamba: Wavelet-Driven Mamba Fusion for RGB-Infrared Object Detection

📅 2025-07-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address insufficient feature fusion and high-frequency information loss in RGB-infrared cross-modal object detection, this paper proposes WaveMamba—a novel framework that pioneers the integration of discrete wavelet transform (DWT) into multimodal fusion. DWT decomposes features into low-frequency (structural) and high-frequency (textural/edge) components, enabling dedicated processing: a low-frequency Mamba fusion module and a gated attention mechanism facilitate deep cross-modal interaction, while an absolute-maximum selection strategy enhances the high-frequency representation. Finally, a detection head built around the inverse DWT (IDWT) reconstructs the fused features to minimize information degradation. Evaluated on four benchmark datasets, WaveMamba achieves a 4.5% average mAP improvement over state-of-the-art methods, demonstrating superior robustness and effectiveness in complex scenarios.
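The frequency-split pipeline the summary describes (DWT decomposition, separate low-/high-frequency fusion, IDWT reconstruction) can be sketched with a single-level 2-D Haar transform in NumPy. The Haar basis, the plain average standing in for the Mamba low-frequency fusion, and all function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def haar_dwt2(x):
    # Single-level 2-D Haar DWT: split an (H, W) feature map into a
    # low-frequency sub-band (LL) and three high-frequency sub-bands
    # (LH, HL, HH) carrying edge/texture detail.
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a - b + c - d) / 2.0
    hl = (a + b - c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, (lh, hl, hh)

def haar_idwt2(ll, highs):
    # Exact inverse of haar_dwt2 (orthonormal Haar), so no information
    # is lost in the decompose/reconstruct round trip.
    lh, hl, hh = highs
    h, w = ll.shape
    out = np.zeros((2 * h, 2 * w))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out

def abs_max_fuse(h_rgb, h_ir):
    # Absolute-maximum selection: per coefficient, keep whichever
    # modality has the larger magnitude response.
    return np.where(np.abs(h_rgb) >= np.abs(h_ir), h_rgb, h_ir)

def fuse_features(f_rgb, f_ir, low_fuse=lambda a, b: (a + b) / 2.0):
    # Decompose both modalities, fuse sub-bands separately, reconstruct.
    # `low_fuse` is a placeholder for the Mamba low-frequency fusion.
    ll_r, hi_r = haar_dwt2(f_rgb)
    ll_i, hi_i = haar_dwt2(f_ir)
    ll = low_fuse(ll_r, ll_i)
    hi = tuple(abs_max_fuse(hr, hv) for hr, hv in zip(hi_r, hi_i))
    return haar_idwt2(ll, hi)
```

With identical inputs the round trip is lossless, which is the property the IDWT-based detection head relies on.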

📝 Abstract
Leveraging the complementary characteristics of visible (RGB) and infrared (IR) imagery offers significant potential for improving object detection. In this paper, we propose WaveMamba, a cross-modality fusion method that efficiently integrates the unique and complementary frequency features of RGB and IR decomposed by Discrete Wavelet Transform (DWT). An improved detection head incorporating the Inverse Discrete Wavelet Transform (IDWT) is also proposed to reduce information loss and produce the final detection results. The core of our approach is the introduction of WaveMamba Fusion Block (WMFB), which facilitates comprehensive fusion across low-/high-frequency sub-bands. Within WMFB, the Low-frequency Mamba Fusion Block (LMFB), built upon the Mamba framework, first performs initial low-frequency feature fusion with channel swapping, followed by deep fusion with an advanced gated attention mechanism for enhanced integration. High-frequency features are enhanced using a strategy that applies an "absolute maximum" fusion approach. These advancements lead to significant performance gains, with our method surpassing state-of-the-art approaches and achieving average mAP improvements of 4.5% on four benchmarks.
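The abstract's two low-frequency stages, channel swapping followed by gated fusion, can be sketched in NumPy. The pooled-statistics sigmoid gate stands in for the paper's gated attention, the Mamba blocks are omitted entirely, and every name, shape, and parameter here is an assumption for illustration.

```python
import numpy as np

def channel_swap(a, b, ratio=0.5):
    # Initial cross-modal mixing: exchange the first `ratio` fraction of
    # channels between the two (C, H, W) modality feature maps.
    k = int(a.shape[0] * ratio)
    a2 = np.concatenate([b[:k], a[k:]], axis=0)
    b2 = np.concatenate([a[:k], b[k:]], axis=0)
    return a2, b2

def gated_fusion(a, b, w, bias):
    # Gated fusion: a sigmoid gate computed from globally pooled channel
    # statistics decides, per channel, how much of each modality to keep.
    # w: (C, 2C) weights and bias: (C,) are stand-ins for learned parameters.
    pooled = np.concatenate([a.mean(axis=(1, 2)), b.mean(axis=(1, 2))])  # (2C,)
    g = 1.0 / (1.0 + np.exp(-(w @ pooled + bias)))                       # (C,)
    g = g[:, None, None]
    return g * a + (1.0 - g) * b
```

With zero weights the gate is 0.5 everywhere and the fusion reduces to a plain average; training would push the gate toward the more informative modality per channel.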
Problem

Research questions and friction points this paper is trying to address.

Fusion of RGB and IR images for better object detection
Reducing information loss in cross-modality feature integration
Enhancing low- and high-frequency features for improved detection accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Wavelet-driven Mamba fusion for RGB-IR detection
DWT and IDWT for frequency feature integration
Gated attention and maximum fusion in WMFB
👥 Authors
Haodong Zhu, Beihang University, China
Wenhao Dong, Beihang University, China
Linlin Yang, Communication University of China (Computer Vision, Machine Learning)
Hong Li, Beihang University, China
Yuguang Yang, Microsoft, Amazon Alexa AI, Tsinghua University, Johns Hopkins University (Artificial Intelligence, Natural Language Processing, Stochastic Process & Control, Computational Physics)
Yangyang Ren, Beihang University, China
Qingcheng Zhu, Beihang University, China
Zichao Feng, Beihang University, China
Changbai Li, Beihang University, China
Shaohui Lin, East China Normal University, China
Runqi Wang, Beijing Jiaotong University (Few-Shot Learning, Continual Learning, Multi-Modal)
Xiaoyan Luo, Beihang University (Computer Vision)
Baochang Zhang, Technische Universität München (Computer-Assisted Intervention, Medical Image Analysis, Deep Learning)