🤖 AI Summary
To address the loss of prosodic and paralinguistic information (e.g., emotion, prominence) when quantizing self-supervised speech models like HuBERT, this paper proposes Segmentation-Variant Codebooks (SVCs), a hierarchical quantization scheme over variable-length segments. It segments speech at multiple linguistic granularities (frame, phone, word, utterance), employs a separate codebook per granularity to factorize the representation into segment-specific discrete streams, and pools features before discretization to better preserve segment-level information. Without increasing bitrate, the method significantly improves performance on paralinguistic probing tasks, including emotion and prominence classification, while resynthesis experiments show improved style realization, preserved intelligibility, and slightly improved audio quality. The core contribution is the hierarchical joint quantization of semantic and paralinguistic speech information, offering an efficient paradigm for high-fidelity discrete speech representation.
📝 Abstract
Quantization in SSL speech models (e.g., HuBERT) improves compression and performance in tasks like language modeling, resynthesis, and text-to-speech but often discards prosodic and paralinguistic information (e.g., emotion, prominence). While increasing codebook size mitigates some loss, it inefficiently raises bitrates. We propose Segmentation-Variant Codebooks (SVCs), which quantize speech at distinct linguistic units (frame, phone, word, utterance), factorizing it into multiple streams of segment-specific discrete features. Our results show that SVCs are significantly more effective at preserving prosodic and paralinguistic information across probing tasks. Additionally, we find that pooling before rather than after discretization better retains segment-level information. Resynthesis experiments further confirm improved style realization and slightly improved quality while preserving intelligibility.
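The idea of segmentation-variant quantization can be illustrated with a minimal sketch. The segment boundaries, codebook sizes, and feature dimensions below are hypothetical stand-ins (the paper's actual codebooks would be learned, e.g., via k-means over SSL features); the sketch only shows the mechanics of pooling frame-level features per segment *before* discretization, with a separate codebook per segmentation level:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic frame-level SSL features: 20 frames, 8-dim (stand-ins for HuBERT outputs).
frames = rng.normal(size=(20, 8))

# Hypothetical segment boundaries (start, end frame) for each linguistic unit.
segments = {
    "frame": [(i, i + 1) for i in range(20)],
    "phone": [(0, 5), (5, 12), (12, 20)],
    "word": [(0, 12), (12, 20)],
    "utterance": [(0, 20)],
}

# One codebook per segmentation level (random stand-ins for learned codebooks).
codebooks = {level: rng.normal(size=(16, 8)) for level in segments}

def quantize(pooled, codebook):
    """Assign each pooled vector to its nearest codebook entry (Euclidean)."""
    dists = np.linalg.norm(pooled[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)

streams = {}
for level, spans in segments.items():
    # Pool BEFORE discretization: mean over each segment's frames,
    # then quantize the pooled vector with that level's codebook.
    pooled = np.stack([frames[a:b].mean(axis=0) for a, b in spans])
    streams[level] = quantize(pooled, codebooks[level])

for level, codes in streams.items():
    print(level, codes.tolist())
```

The result is one discrete stream per segmentation level: a 20-code frame stream alongside much shorter phone, word, and utterance streams, so coarse-grained (prosodic/paralinguistic) information is carried at a small bitrate cost relative to the frame stream.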