Isolated Sign Language Recognition with Segmentation and Pose Estimation

📅 2025-12-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address data scarcity, high inter-signer variability, and excessive computational overhead in American Sign Language (ASL) recognition, this paper proposes a lightweight and efficient framework for isolated sign language recognition (ISLR). Methodologically, it introduces the first integration of pose-guided key-region segmentation with a hybrid ResNet-Transformer backbone, jointly modeling hand and facial keypoints while learning spatiotemporal representations. Key contributions include: (1) pose-driven semantic segmentation to enhance robustness against signer diversity and occlusion; and (2) architecture-level co-compression to drastically reduce computational cost. Experiments demonstrate state-of-the-art performance across multiple ISLR benchmarks, achieving a 3.2× inference speedup, 41% parameter reduction, and a 12.7% accuracy gain on unseen signers—significantly improving cross-subject generalization and real-world deployability.

Technology Category

Application Category

📝 Abstract
The recent surge in large language models has automated translations of spoken and written languages. However, these advances remain largely inaccessible to American Sign Language (ASL) users, whose language relies on complex visual cues. Isolated sign language recognition (ISLR) - the task of classifying videos of individual signs - can help bridge this gap but is currently limited by scarce per-sign data, high signer variability, and substantial computational costs. We propose a model for ISLR that reduces computational requirements while maintaining robustness to signer variation. Our approach integrates (i) a pose estimation pipeline to extract hand and face joint coordinates, (ii) a segmentation module that isolates relevant information, and (iii) a ResNet-Transformer backbone to jointly model spatial and temporal dependencies.
Problem

Research questions and friction points this paper is trying to address.

Automating translation for American Sign Language users
Overcoming data scarcity and signer variability in recognition
Reducing computational costs for isolated sign language classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pose estimation extracts hand and face joint coordinates
Segmentation module isolates relevant visual information
ResNet-Transformer backbone models spatial and temporal dependencies
D
Daniel Perkins
University of Tennessee, Knoxville
D
Davis Hunter
University of Tennessee, Knoxville
Dhrumil Patel
Dhrumil Patel
PhD @ Cornell University
Quantum AlgorithmsQuantum SimulationOptimization
G
Galen Flanagan
University of Tennessee, Knoxville