Sign Spotting Disambiguation using Large Language Models

📅 2025-07-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Addressing core challenges in continuous sign language video recognition—namely, scarce annotated data, rigid vocabulary constraints, and contextual ambiguity—this paper proposes a training-free zero-shot framework. First, it extracts global spatiotemporal and handshape features, then performs lexicon-level sign matching via Dynamic Time Warping (DTW) and cosine similarity. Subsequently, a large language model (LLM) conducts context-aware beam search for disambiguation, without any LLM fine-tuning. The method enables flexible vocabulary expansion and fine-grained temporal localization, substantially reducing reliance on labeled data. Evaluated on both synthetic and real-world sign language datasets, it achieves superior recognition accuracy and sentence fluency compared to conventional approaches. Crucially, it provides the first empirical validation of LLMs’ effectiveness and generalization capability in zero-shot sign language localization.

Technology Category

Application Category

📝 Abstract
Sign spotting, the task of identifying and localizing individual signs within continuous sign language video, plays a pivotal role in scaling dataset annotations and addressing the severe data scarcity issue in sign language translation. While automatic sign spotting holds great promise for enabling frame-level supervision at scale, it grapples with challenges such as vocabulary inflexibility and ambiguity inherent in continuous sign streams. Hence, we introduce a novel, training-free framework that integrates Large Language Models (LLMs) to significantly enhance sign spotting quality. Our approach extracts global spatio-temporal and hand shape features, which are then matched against a large-scale sign dictionary using dynamic time warping and cosine similarity. This dictionary-based matching inherently offers superior vocabulary flexibility without requiring model retraining. To mitigate noise and ambiguity from the matching process, an LLM performs context-aware gloss disambiguation via beam search, notably without fine-tuning. Extensive experiments on both synthetic and real-world sign language datasets demonstrate our method's superior accuracy and sentence fluency compared to traditional approaches, highlighting the potential of LLMs in advancing sign spotting.
Problem

Research questions and friction points this paper is trying to address.

Identifying signs in continuous sign language videos
Overcoming vocabulary inflexibility and ambiguity in sign spotting
Enhancing sign spotting accuracy without model retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free framework integrating LLMs
Global spatio-temporal and hand shape features
LLM-based context-aware gloss disambiguation
🔎 Similar Papers
No similar papers found.