🤖 AI Summary
This work addresses the daily communication barriers faced by individuals with hearing impairments due to atypical speech articulation. We propose an instruction-driven audiovisual personal assistant specifically designed for this population. Methodologically, we adopt the Omni-Model paradigm and introduce a multimodal preprocessing framework tailored to hearing-impaired speech, featuring facial landmark-guided lip stabilization, quality-aware curriculum learning, and a novel unified 3D-Resampler for robust fusion of ambiguous acoustic signals and dynamic lip movements. Evaluated on our newly constructed HI-Dialogue dataset, the model achieves state-of-the-art performance in both semantic fidelity and literal accuracy. Key contributions include: (1) the first end-to-end joint modeling paradigm for hearing-impaired speech-to-text translation and dialogue understanding; (2) a reusable multimodal preprocessing toolkit; and (3) a systematic technical framework for robust modeling of atypical articulation.
📝 Abstract
Hearing-impaired individuals often face significant barriers in daily communication due to the inherent challenges of producing clear speech. To address this, we introduce the Omni-Model paradigm into assistive technology and present HI-TransPA, an instruction-driven audio-visual personal assistant. The model fuses indistinct speech with lip dynamics, enabling both translation and dialogue within a single multimodal framework. To address the distinctive pronunciation patterns of hearing-impaired speech and the limited adaptability of existing models, we develop a multimodal preprocessing and curation pipeline that detects facial landmarks, stabilizes the lip region, and quantitatively evaluates sample quality. These quality scores guide a curriculum learning strategy that first trains on clean, high-confidence samples and progressively incorporates harder cases to strengthen model robustness. Architecturally, we employ a novel unified 3D-Resampler to efficiently encode lip dynamics, which is critical for accurate interpretation. Experiments on the purpose-built HI-Dialogue dataset show that HI-TransPA achieves state-of-the-art performance in both literal accuracy and semantic fidelity. Our work establishes a foundation for applying Omni-Models to assistive communication technology, providing an end-to-end modeling framework and essential processing tools for future research.
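The quality-aware curriculum described above can be sketched in a minimal form: samples receive a preprocessing quality score, and each epoch admits progressively lower-quality (harder) samples via a decaying threshold. All names, the scoring scale, and the linear schedule here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of quality-aware curriculum learning: the quality
# threshold decays linearly across epochs, so training starts on clean,
# high-confidence samples and gradually incorporates harder cases.
# The score range, start/floor thresholds, and schedule are assumptions.
from dataclasses import dataclass


@dataclass
class Sample:
    sample_id: str
    quality: float  # assumed scale: 0.0 (noisy/unstable lips) .. 1.0 (clean)


def curriculum_subset(samples, epoch, total_epochs,
                      start_threshold=0.8, floor_threshold=0.2):
    """Return the training subset admitted at this epoch.

    The admission threshold interpolates linearly from start_threshold
    (first epoch, clean samples only) down to floor_threshold (last epoch,
    nearly the full pool).
    """
    progress = epoch / max(total_epochs - 1, 1)
    threshold = start_threshold - progress * (start_threshold - floor_threshold)
    return [s for s in samples if s.quality >= threshold]


samples = [Sample("a", 0.95), Sample("b", 0.60), Sample("c", 0.30)]
early = curriculum_subset(samples, epoch=0, total_epochs=5)  # clean samples only
late = curriculum_subset(samples, epoch=4, total_epochs=5)   # full pool admitted
print(len(early))  # 1
print(len(late))   # 3
```

In practice the schedule could also be staged or confidence-calibrated; the key property shown here is monotonic expansion of the training pool from high-confidence to hard samples.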