🤖 AI Summary
To address data scarcity, high inter-signer variability, and excessive computational overhead in American Sign Language (ASL) recognition, this paper proposes a lightweight and efficient framework for isolated sign language recognition (ISLR). Methodologically, it introduces the first integration of pose-guided key-region segmentation with a hybrid ResNet-Transformer backbone, jointly modeling hand and facial keypoints while learning spatiotemporal representations. Key contributions include: (1) pose-driven semantic segmentation to enhance robustness against signer diversity and occlusion; and (2) architecture-level co-compression to drastically reduce computational cost. Experiments demonstrate state-of-the-art performance across multiple ISLR benchmarks, achieving a 3.2× inference speedup, 41% parameter reduction, and a 12.7% accuracy gain on unseen signers—significantly improving cross-subject generalization and real-world deployability.
📝 Abstract
The recent surge in large language models has automated translations of spoken and written languages. However, these advances remain largely inaccessible to American Sign Language (ASL) users, whose language relies on complex visual cues. Isolated sign language recognition (ISLR) - the task of classifying videos of individual signs - can help bridge this gap but is currently limited by scarce per-sign data, high signer variability, and substantial computational costs. We propose a model for ISLR that reduces computational requirements while maintaining robustness to signer variation. Our approach integrates (i) a pose estimation pipeline to extract hand and face joint coordinates, (ii) a segmentation module that isolates relevant information, and (iii) a ResNet-Transformer backbone to jointly model spatial and temporal dependencies.