🤖 AI Summary
Dynamic gesture recognition remains difficult because the pose, scale, and shape of the signer's hand vary from subject to subject. To address this, the authors propose the Multiscaled Multi-Head Attention Video Transformer Network (MsMHA-VTN), an architecture that extracts a pyramidal hierarchy of multiscale features and applies a multiscale multi-head self-attention mechanism in which each attention head operates at its own attention dimension, allowing the model to attend across scales. The model supports both single-modality (e.g., RGB) and multimodal (e.g., RGB-D) gesture recognition. Evaluated on the NVGesture and Briareo benchmarks, MsMHA-VTN achieves accuracies of 88.22% and 99.10%, respectively, outperforming existing methods and indicating strong generalization for dynamic hand gesture recognition under real-world variability.
📝 Abstract
Dynamic gesture recognition is a challenging research area due to variations in the pose, size, and shape of the signer's hand. In this letter, a Multiscaled Multi-Head Attention Video Transformer Network (MsMHA-VTN) for dynamic hand gesture recognition is proposed. A pyramidal hierarchy of multiscale features is extracted using a transformer with multiscaled head attention. The proposed model employs a different attention dimension for each head of the transformer, which enables it to provide attention at multiple scales. Furthermore, recognition performance is examined using multiple modalities in addition to a single modality. Extensive experiments demonstrate the superior performance of the proposed MsMHA-VTN, with overall accuracies of 88.22% and 99.10% on the NVGesture and Briareo datasets, respectively.
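The mechanism described above, a distinct attention dimension per head, can be sketched in a few lines of PyTorch. The sketch below is a minimal, hypothetical reading of the abstract, not the authors' implementation: the class name `MultiScaleMHSA`, the model width of 256, and the head dimensions (16, 32, 64, 128) are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

class MultiScaleMHSA(nn.Module):
    """Multi-head self-attention where each head uses its own attention
    dimension, so different heads attend at different scales.
    Head dimensions are illustrative, not taken from the paper."""

    def __init__(self, d_model=256, head_dims=(16, 32, 64, 128)):
        super().__init__()
        self.head_dims = head_dims
        # One fused Q/K/V projection per head, each with its own width.
        self.qkv = nn.ModuleList(
            [nn.Linear(d_model, 3 * d) for d in head_dims]
        )
        # Project the concatenated head outputs back to the model width.
        self.out = nn.Linear(sum(head_dims), d_model)

    def forward(self, x):  # x: (batch, tokens, d_model)
        outs = []
        for proj, d in zip(self.qkv, self.head_dims):
            q, k, v = proj(x).chunk(3, dim=-1)  # each (B, T, d)
            # Scaled dot-product attention at this head's scale.
            attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
            outs.append(attn @ v)  # (B, T, d)
        return self.out(torch.cat(outs, dim=-1))  # (B, T, d_model)

x = torch.randn(2, 196, 256)  # e.g., 196 spatiotemporal tokens
y = MultiScaleMHSA()(x)
print(y.shape)  # torch.Size([2, 196, 256])
```

Because the unequal-width heads are concatenated and projected back to the model dimension, a block of this form stays drop-in compatible with a standard transformer layer.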