🤖 AI Summary
To address the high computational cost and the difficulty of efficiently modeling heterogeneous multimodal temporal data in multi-view action recognition, this paper proposes MV-GMN, a lightweight and efficient state-space model. Methodologically, it introduces (1) a novel bidirectional state-space block enabling four-directional scanning, with both view-prioritized and time-prioritized orders, and (2) a hybrid GCN module that combines rule-based graphs with KNN graphs to jointly model RGB and skeleton modalities, multiple views, and temporal segments. With linear inference complexity, MV-GMN is significantly more efficient than Transformer-based baselines. On the NTU RGB+D 120 dataset, it achieves 97.3% accuracy under the cross-subject protocol and 96.7% under the cross-view protocol, demonstrating both superior computational efficiency and strong representational capacity for multi-view action recognition.
📝 Abstract
Recent advancements in multi-view action recognition have largely relied on Transformer-based models. While effective and adaptable, these models often require substantial computational resources, especially in scenarios with multiple views and multiple temporal sequences. Addressing this limitation, this paper introduces the MV-GMN model, a state-space model specifically designed to efficiently aggregate multi-modal data (RGB and skeleton), multi-view perspectives, and multi-temporal information for action recognition with reduced computational complexity. The MV-GMN model employs an innovative Multi-View Graph Mamba network comprising a series of MV-GMN blocks. Each block includes a proposed Bidirectional State Space Block and a GCN module. The Bidirectional State Space Block introduces four scanning strategies, including view-prioritized and time-prioritized approaches. The GCN module leverages rule-based and KNN-based methods to construct the graph network, effectively integrating features from different viewpoints and temporal instances. Demonstrating its efficacy, MV-GMN outperforms state-of-the-art methods on several datasets, achieving notable accuracies of 97.3% and 96.7% on the NTU RGB+D 120 dataset in cross-subject and cross-view scenarios, respectively. MV-GMN also surpasses Transformer-based baselines while requiring only linear inference complexity, underscoring the model's ability to reduce computational load and enhance the scalability and applicability of multi-view action recognition technologies.
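The two mechanisms the abstract names, four-directional scanning over a (view, time) token grid and KNN-based graph construction, can be sketched minimally as follows. This is an illustrative sketch only: the function names, the Euclidean distance metric, and the binary adjacency are assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def scan_orders(V, T):
    """Sketch of four scan orders over a V x T (view, time) token grid:
    view-prioritized and time-prioritized, each forward and backward.
    Returns four flat index sequences into a length-V*T token list."""
    grid = np.arange(V * T).reshape(V, T)
    view_fwd = grid.reshape(-1)        # view-prioritized, forward
    view_bwd = view_fwd[::-1]          # view-prioritized, backward
    time_fwd = grid.T.reshape(-1)      # time-prioritized, forward
    time_bwd = time_fwd[::-1]          # time-prioritized, backward
    return view_fwd, view_bwd, time_fwd, time_bwd

def knn_adjacency(feats, k):
    """Sketch of the KNN half of the hybrid graph: connect each token
    to its k nearest neighbors in feature space (Euclidean, assumed)."""
    n = len(feats)
    d = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # no self-loops
    nn = np.argsort(d, axis=1)[:, :k]  # k nearest per row
    A = np.zeros((n, n), dtype=int)
    A[np.repeat(np.arange(n), k), nn.reshape(-1)] = 1
    return A
```

Each of the four index sequences would feed one directional pass of the state-space block, while the KNN adjacency would be fused with a rule-based (e.g., fixed-topology) graph before the GCN layer.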