RGB-Event based Pedestrian Attribute Recognition: A Benchmark Dataset and An Asymmetric RWKV Fusion Framework

📅 2025-04-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing RGB-based pedestrian attribute recognition methods suffer from poor robustness under low-light and high-speed conditions, and they neglect the affective dimension. To address this, we propose the first multimodal RGB-Event pedestrian attribute recognition task that jointly models appearance and emotion. We introduce EventPAR, a large-scale, fine-grained benchmark comprising 100K RGB-Event sample pairs annotated with 50 appearance attributes and 6 emotion categories, along with a unified evaluation protocol. Methodologically, we design an RWKV-based visual encoder and an asymmetric RWKV cross-modal fusion module, overcoming limitations of CNN/Transformer architectures to enable efficient heterogeneous modality alignment and lightweight temporal fusion. Our approach achieves state-of-the-art performance on EventPAR, MARS-Attribute, and DukeMTMC-VID-Attribute. Both code and dataset are publicly released to advance multimodal fine-grained behavioral understanding.

📝 Abstract
Existing pedestrian attribute recognition methods are generally developed based on RGB frame cameras. However, these approaches are constrained by the limitations of RGB cameras, such as sensitivity to lighting conditions and motion blur, which hinder their performance. Furthermore, current attribute recognition primarily focuses on analyzing pedestrians' external appearance and clothing, lacking an exploration of emotional dimensions. In this paper, we revisit these issues and propose a novel multi-modal RGB-Event attribute recognition task by drawing inspiration from the advantages of event cameras in low-light, high-speed, and low-power consumption. Specifically, we introduce the first large-scale multi-modal pedestrian attribute recognition dataset, termed EventPAR, comprising 100K paired RGB-Event samples that cover 50 attributes related to both appearance and six human emotions, diverse scenes, and various seasons. By retraining and evaluating mainstream PAR models on this dataset, we establish a comprehensive benchmark and provide a solid foundation for future research in terms of data and algorithmic baselines. In addition, we propose a novel RWKV-based multi-modal pedestrian attribute recognition framework, featuring an RWKV visual encoder and an asymmetric RWKV fusion module. Extensive experiments are conducted on our proposed dataset as well as two simulated datasets (MARS-Attribute and DukeMTMC-VID-Attribute), achieving state-of-the-art results. The source code and dataset will be released at https://github.com/Event-AHU/OpenPAR.
Problem

Research questions and friction points this paper is trying to address.

Overcoming RGB camera limitations in pedestrian attribute recognition
Expanding attribute recognition to include emotional dimensions
Creating a multi-modal RGB-Event dataset and fusion framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

First large-scale RGB-Event pedestrian dataset EventPAR
Asymmetric RWKV fusion for multi-modal recognition
RWKV visual encoder handles diverse conditions effectively
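The asymmetric fusion idea above can be sketched with RWKV's WKV-style recurrence: receptance (gating) is driven by one modality while keys/values come from the other. This is a minimal, hypothetical illustration only; the paper's actual module uses learned projections on deep features, and the function name, tied key/value choice, and parameter shapes here are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def asymmetric_rwkv_fusion(rgb_tokens, event_tokens, w, u):
    """Hypothetical asymmetric RWKV-style cross-modal fusion sketch.

    Receptance comes from RGB tokens; keys/values come from the event
    stream, aggregated with an exponential per-channel time decay (w)
    and a current-token bonus (u), as in RWKV's WKV operator.

    rgb_tokens:   (T, D) RGB features (drive the receptance gate)
    event_tokens: (T, D) event features (provide key/value content)
    w, u:         (D,) per-channel decay and bonus parameters
    """
    T, D = event_tokens.shape
    r = sigmoid(rgb_tokens)            # receptance gate from RGB
    k = v = event_tokens               # keys/values from events (tied here for simplicity)
    out = np.zeros((T, D))
    for t in range(T):
        # decayed weights over past event tokens, plus a bonus for the current one
        past = np.arange(t)
        weights = np.exp(-(t - 1 - past)[:, None] * w + k[:t])          # (t, D)
        num = (weights * v[:t]).sum(axis=0) + np.exp(u + k[t]) * v[t]
        den = weights.sum(axis=0) + np.exp(u + k[t])
        out[t] = r[t] * (num / den)    # gated convex combination of event values
    return out

# Usage on random toy features
rng = np.random.default_rng(0)
rgb = rng.standard_normal((8, 4))
evt = rng.standard_normal((8, 4))
fused = asymmetric_rwkv_fusion(rgb, evt, w=np.full(4, 0.5), u=np.zeros(4))
```

Because the WKV term is a convex combination of event values and the receptance gate lies in (0, 1), the fused output stays bounded by the event features, which is one reason this recurrence is cheap and stable compared to full softmax cross-attention.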
Xiao Wang
School of Computer Science and Technology, Anhui University, Hefei, China
Haiyang Wang
School of Computer Science and Technology, Anhui University, Hefei, China
Shiao Wang
Anhui University
Deep Learning
Qiang Chen
School of Computer Science and Technology, Anhui University, Hefei, China
Jiandong Jin
School of Artificial Intelligence, Anhui University, Hefei, China
Haoyu Song
School of Computer Science and Technology, Anhui University, Hefei, China
Bo Jiang
School of Computer Science and Technology, Anhui University, Hefei, China
Chenglong Li
Professor, The University of Florida
Drug Design, Drug Discovery, Molecular Recognition, Molecular Modeling, Protein Structure and Dynamics