Strip-Fusion: Spatiotemporal Fusion for Multispectral Pedestrian Detection

📅 2026-01-25
🏛️ IEEE Robotics and Automation Letters
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multispectral pedestrian detection often degrades because existing methods neglect temporal information and struggle with modality misalignment, illumination variation, and severe occlusion. To address these challenges, this work proposes Strip-Fusion, the first approach to jointly model spatiotemporal cues for this task. It employs temporally adaptive convolution to dynamically weight features and capture motion cues, introduces a KL divergence-based loss to mitigate modality imbalance, and adds a lightweight post-processing step to suppress false positives. The method achieves state-of-the-art performance on the KAIST and CVC-14 benchmarks, with particularly large improvements over existing approaches under challenging conditions such as heavy occlusion and cross-modality misalignment.
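
The temporally adaptive convolution mentioned above can be pictured with a short sketch. This is a hypothetical PyTorch reconstruction, not the authors' code: the module name, tensor shapes, and the pooling-based weight head are assumptions; the summary only states that spatiotemporal features are dynamically weighted.

```python
# A minimal PyTorch sketch of temporally adaptive feature weighting.
# Hypothetical reconstruction: module names, shapes, and the pooling-based
# weight head are assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class TemporallyAdaptiveConv(nn.Module):
    """Scores each frame's feature map from its pooled content, softmax-
    normalizes the scores over time, and fuses the weighted stack with a conv."""

    def __init__(self, channels: int):
        super().__init__()
        self.weight_head = nn.Sequential(           # per-frame scalar score
            nn.Linear(channels, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, 1),
        )
        self.fuse = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, C, H, W) stack of per-frame feature maps
        pooled = x.mean(dim=(3, 4))                  # (B, T, C) global pooling
        weights = torch.softmax(self.weight_head(pooled), dim=1)  # (B, T, 1)
        fused = (x * weights[..., None, None]).sum(dim=1)         # (B, C, H, W)
        return self.fuse(fused)


feats = torch.randn(2, 3, 64, 32, 32)   # batch of 3-frame feature stacks
print(TemporallyAdaptiveConv(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```

The softmax over the temporal axis makes the per-frame weights input-dependent, which is one simple way to realize "dynamically weighting spatiotemporal features".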

📝 Abstract
Pedestrian detection is a critical task in robot perception. Multispectral modalities (visible light and thermal) can boost pedestrian detection performance by providing complementary visual information. Several gaps remain in existing multispectral pedestrian detection methods. First, existing approaches primarily focus on spatial fusion and often neglect temporal information. Second, RGB and thermal image pairs in multispectral benchmarks may not always be perfectly aligned. Third, pedestrians remain difficult to detect under varying lighting conditions and occlusion. This work proposes Strip-Fusion, a spatial-temporal fusion network that is robust to misalignment in input images as well as to varying lighting conditions and heavy occlusion. The Strip-Fusion pipeline integrates temporally adaptive convolutions to dynamically weight spatial-temporal features, enabling our model to better capture pedestrian motion and context over time. A novel Kullback–Leibler divergence loss is designed to mitigate modality imbalance between visible and thermal inputs, guiding feature alignment toward the more informative modality during training. Furthermore, a novel post-processing algorithm is developed to reduce false positives. Extensive experimental results show that our method performs competitively on both the KAIST and CVC-14 benchmarks, with significant improvements over the previous state of the art under challenging conditions such as heavy occlusion and misalignment.
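
As a rough illustration of the modality-imbalance loss, here is a minimal sketch assuming the more informative modality is selected per sample (e.g., by an illumination or confidence score) and used as a detached KL target. The selection rule, the feature granularity, and all names below are assumptions; the paper's actual formulation may differ.

```python
# Minimal sketch of a KL-divergence modality-balance loss in PyTorch.
# Assumption: the "more informative" modality is chosen per sample and
# treated as a fixed target distribution for the weaker modality.
import torch
import torch.nn.functional as F


def modality_kl_loss(rgb_feat: torch.Tensor,
                     thermal_feat: torch.Tensor,
                     rgb_is_informative: torch.Tensor) -> torch.Tensor:
    """Pull the weaker modality's channel distribution toward the stronger one.

    rgb_feat, thermal_feat: (B, C, H, W) feature maps from each stream.
    rgb_is_informative: (B,) boolean mask, True where RGB is the better cue.
    """
    # Collapse spatial dims; compare channel-wise distributions per sample.
    rgb = rgb_feat.mean(dim=(2, 3))          # (B, C)
    thm = thermal_feat.mean(dim=(2, 3))      # (B, C)

    # Target = informative modality (no gradient), input = weaker modality.
    target = torch.where(rgb_is_informative.unsqueeze(1), rgb, thm).detach()
    weaker = torch.where(rgb_is_informative.unsqueeze(1), thm, rgb)

    return F.kl_div(F.log_softmax(weaker, dim=1),
                    F.softmax(target, dim=1),
                    reduction="batchmean")


loss = modality_kl_loss(torch.randn(4, 64, 32, 32),
                        torch.randn(4, 64, 32, 32),
                        torch.tensor([True, False, True, True]))
print(loss.item())
```

Detaching the target keeps gradients flowing only into the weaker stream, so alignment is pulled toward the informative modality rather than averaging the two.
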
Problem

Research questions and friction points this paper is trying to address.

multispectral pedestrian detection
spatiotemporal fusion
modality misalignment
occlusion
lighting variation
Innovation

Methods, ideas, or system contributions that make the work stand out.

spatiotemporal fusion
temporally adaptive convolution
modality imbalance
KL divergence loss
misalignment robustness
👥 Authors
A. Kanu-Asiegbu (Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109 USA)
Nitin Jotwani (Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor, MI 48109 USA)
Xiaoxiao Du (University of Michigan)
Machine Learning · Perception · Pattern Recognition · Signal and Image Processing · Computational Intelligence