Beyond Words: AuralLLM and SignMST-C for Precise Sign Language Production and Bidirectional Accessibility

📅 2025-01-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address core challenges in sign language production (SLP) and translation (SLT), including semantic inaccuracy, weak articulatory control, and the scarcity of high-quality annotated data, this work introduces the first bilingual sign language accessibility system for Chinese. It proposes two benchmark datasets, CNText2Sign (text-to-sign) and CNSign (sign-to-text), the first to provide morpheme-level articulatory annotations for Chinese Sign Language. Methodologically, the authors design AuraLLM, a LoRA-finetuned large language model augmented with retrieval-augmented generation (RAG) for joint semantic-articulatory modeling, and SignMST-C, a self-supervised video motion pretraining framework that strengthens temporal modeling. Experiments show that AuraLLM achieves a BLEU-4 score of 50.41 on CNText2Sign, while SignMST-C attains BLEU-4 scores of 31.03/32.08 on PHOENIX2014-T, setting a new state of the art in SLT. Together, these contributions establish the first strong baseline systems for both Chinese SLP and SLT.

📝 Abstract
Although sign language recognition helps non-hearing-impaired individuals understand sign language, many hearing-impaired individuals still rely on sign language alone due to limited literacy, underscoring the need for advanced sign language production and translation (SLP and SLT) systems. In the field of sign language production, the lack of adequate models and datasets restricts practical applications. Existing models face challenges in production accuracy and pose control, making it difficult to provide fluent sign language expressions across diverse scenarios. Additionally, data resources are scarce, particularly high-quality datasets with complete sign vocabulary and pose annotations. To address these issues, we introduce CNText2Sign and CNSign, comprehensive datasets to benchmark SLP and SLT, respectively, with CNText2Sign covering gloss and landmark mappings for SLP, and CNSign providing extensive video-to-text data for SLT. To improve the accuracy and applicability of sign language systems, we propose the AuraLLM and SignMST-C models. AuraLLM, incorporating LoRA and RAG techniques, achieves a BLEU-4 score of 50.41 on the CNText2Sign dataset, enabling precise control over gesture semantics and motion. SignMST-C employs self-supervised rapid-motion video pretraining, achieving BLEU-4 scores of 31.03/32.08 on the PHOENIX2014-T benchmark, setting a new state of the art. These models establish robust baselines for the datasets released for their respective tasks.
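The abstract states that AuraLLM is fine-tuned with LoRA. As a reminder of what that technique entails, the sketch below shows the core low-rank weight update in NumPy; the layer sizes, rank `r`, and scaling `alpha` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Minimal sketch of a LoRA (low-rank adaptation) update.
# All shapes and hyperparameters here are hypothetical.
rng = np.random.default_rng(0)

d_out, d_in, r = 8, 8, 2      # layer dimensions and LoRA rank (illustrative)
alpha = 4.0                   # LoRA scaling hyperparameter (illustrative)

W = rng.normal(size=(d_out, d_in))   # frozen pretrained weight
A = rng.normal(size=(r, d_in))       # trainable down-projection
B = np.zeros((d_out, r))             # trainable up-projection, zero-initialized

# Effective weight: W + (alpha / r) * B @ A. Only A and B are trained,
# so the number of trainable parameters is r * (d_in + d_out) instead
# of d_in * d_out.
W_eff = W + (alpha / r) * (B @ A)

# Because B starts at zero, the adapted layer initially matches the
# pretrained one exactly.
print(np.allclose(W_eff, W))  # True
```

The zero-initialization of `B` is the standard LoRA convention: fine-tuning starts from the unchanged pretrained model and the low-rank update grows from there.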
Problem

Research questions and friction points this paper is trying to address.

Sign Language Generation
Accuracy and Fluency
Dataset Limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

AuraLLM
SignMST-C
CNText2Sign & CNSign
Yulong Li
School of Artificial Intelligence and Advanced Computing, Xi’an Jiaotong-Liverpool University, Suzhou, 215123, China
Yuxuan Zhang
School of Artificial Intelligence and Advanced Computing, Xi’an Jiaotong-Liverpool University, Suzhou, 215123, China
Feilong Tang
Monash University, Melbourne, Australia
Mian Zhou
Xi'an Jiaotong-Liverpool University
Zhixiang Lu
School of Artificial Intelligence and Advanced Computing, Xi’an Jiaotong-Liverpool University, Suzhou, 215123, China
Haochen Xue
School of Artificial Intelligence and Advanced Computing, Xi’an Jiaotong-Liverpool University, Suzhou, 215123, China
Yifang Wang
Assistant Professor, Florida State University
Data Visualization · Visual Analytics · Human-AI Collaboration · GenAI · Science of Science
Kang Dang
Xi'an Jiaotong-Liverpool University
Computer Vision · Medical Image Analysis
Jionglong Su
Xi'an Jiaotong-Liverpool University
AI · Big Data · Machine Learning · Statistics