🤖 AI Summary
To address the degradation of face/pedestrian re-identification (ReID) performance under frequent occlusions in surveillance scenarios, this paper proposes the Motion-Aware Fusion (MOTAR-FUSE) network, which for the first time implicitly models motion cues from single-frame static images to enhance feature discriminability. Our method introduces three key innovations: (1) a lightweight visual adapter with a dual-input architecture jointly encodes appearance and motion priors; (2) a motion-aware Transformer trained via a motion-consistency self-supervised task to learn dynamic human representations; and (3) unified modeling and cross-modal feature fusion across static images, occluded images, and video sequences. Evaluated on comprehensive ReID benchmarks—including full-scene, occlusion-specific, and video-based settings—MOTAR-FUSE achieves state-of-the-art performance. Notably, it delivers substantial gains in matching accuracy under severe occlusion, demonstrating robustness and generalizability across diverse surveillance conditions.
📝 Abstract
Navigating the complexities of person re-identification (ReID) in varied surveillance scenarios, particularly when occlusions occur, poses significant challenges. We introduce an innovative Motion-Aware Fusion (MOTAR-FUSE) network that utilizes motion cues derived from static imagery to significantly enhance ReID capabilities. This network incorporates a dual-input visual adapter capable of processing both images and videos, thereby facilitating more effective feature extraction. A unique aspect of our approach is the integration of a motion consistency task, which empowers the motion-aware transformer to adeptly capture the dynamics of human motion. This technique substantially improves the recognition of features in scenarios where occlusions are prevalent, thereby advancing the ReID process. Our comprehensive evaluations across multiple ReID benchmarks, including holistic, occluded, and video-based scenarios, demonstrate that our MOTAR-FUSE network achieves superior performance compared to existing approaches.