🤖 AI Summary
This study investigates the sensitivity of modern speech models to prosodic emphasis: whether emphasized words are systematically distinguished from neutral words in representation space. It proposes a residual-analysis framework that models emphasis as the difference between the representations of an emphasized word token and its neutral counterpart, bypassing reliance on isolated acoustic features or label prediction and directly characterizing the relational structure of prosodic emphasis. Experiments show that the resulting residual vectors correlate strongly with duration changes but perform near chance at word identification, indicating that they encode content-agnostic, purely prosodic information. After fine-tuning for automatic speech recognition (ASR), the residual subspace becomes up to 50% more compact while its clustering structure becomes markedly sharper, yielding lower-dimensional, more discriminative representations. The result is an interpretable, low-dimensional account of prosodic emphasis that grows more structured with task-specific learning.
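The central quantity is straightforward to compute once word-level representations are available. The sketch below is illustrative only: it assumes embeddings for neutral/emphasized word pairs have already been pooled from a speech encoder, uses synthetic arrays in place of real data, and probes the residuals with a simple norm-vs-duration correlation rather than the paper's actual analysis pipeline.

```python
# A minimal sketch of the residual definition, assuming word-level embeddings have
# already been pooled from a speech model (all names and data here are illustrative).
import numpy as np
from scipy.stats import pearsonr

def emphasis_residuals(neutral_emb: np.ndarray, emphasized_emb: np.ndarray) -> np.ndarray:
    """Residual = emphasized-word representation minus its paired neutral representation.

    Both arrays have shape (n_pairs, hidden_dim); row i of each array corresponds
    to the same word spoken neutrally vs. with emphasis.
    """
    return emphasized_emb - neutral_emb

# Hypothetical stand-ins: 500 word pairs with 768-dim pooled embeddings, plus the
# per-pair duration change (emphasized minus neutral duration, in seconds).
rng = np.random.default_rng(0)
neutral = rng.normal(size=(500, 768))
emphasized = neutral + rng.normal(scale=0.1, size=(500, 768))
duration_delta = rng.normal(scale=0.05, size=500)

residuals = emphasis_residuals(neutral, emphasized)

# One simple probe in the spirit of the study: does the size of the residual
# track how much the word was lengthened under emphasis?
r, p = pearsonr(np.linalg.norm(residuals, axis=1), duration_delta)
print(f"residual norm vs. duration change: r={r:.3f}, p={p:.3g}")
```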
📝 Abstract
This work investigates whether modern speech models are sensitive to prosodic emphasis, that is, whether they encode emphasized and neutral words in systematically different ways. Prior work typically relies on isolated acoustic correlates (e.g., pitch, duration) or on label prediction, both of which miss the relational structure of emphasis. This paper proposes a residual-based framework that defines emphasis as the difference between paired neutral and emphasized word representations. Analyses of self-supervised speech models show that these residuals correlate strongly with duration changes yet perform poorly at word-identity prediction, indicating a structured, relational encoding of prosodic emphasis. In ASR-fine-tuned models, residuals occupy a subspace up to 50% more compact than in pre-trained models, further suggesting that emphasis is encoded as a consistent, low-dimensional transformation that becomes more structured with task-specific learning.
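One common proxy for the "compactness" of a subspace is the number of principal components needed to explain a fixed fraction of the residuals' variance. The sketch below illustrates that idea with synthetic low-rank data and a 90% variance cutoff; the paper's own dimensionality measure and encoders may differ.

```python
# Illustrative compactness comparison via PCA effective dimensionality.
# All data here is synthetic; real residual matrices would come from the pairing
# procedure above, computed once with a pre-trained encoder and once with an
# ASR fine-tuned counterpart.
import numpy as np
from sklearn.decomposition import PCA

def effective_dim(residuals: np.ndarray, variance_threshold: float = 0.90) -> int:
    """Number of principal components needed to reach `variance_threshold` of the
    residuals' variance; a smaller count means a more compact emphasis subspace."""
    cumulative = np.cumsum(PCA().fit(residuals).explained_variance_ratio_)
    return int(np.searchsorted(cumulative, variance_threshold) + 1)

# Hypothetical stand-ins for residual matrices of shape (n_pairs, hidden_dim).
rng = np.random.default_rng(0)
pretrained_residuals = rng.normal(size=(500, 768))
finetuned_residuals = rng.normal(size=(500, 32)) @ rng.normal(size=(32, 768))  # low-rank toy data

dim_pre = effective_dim(pretrained_residuals)
dim_ft = effective_dim(finetuned_residuals)
print(f"effective dim: pre-trained={dim_pre}, ASR fine-tuned={dim_ft}")
print(f"relative compression: {100 * (1 - dim_ft / dim_pre):.1f}%")
```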