🤖 AI Summary
This study systematically investigates the impact of speech segmentation width and discrete unit clustering scale on Speech Language Model (SLM) performance. We propose a unified tokenization framework integrating fixed- or variable-width segmentation with multi-scale K-means clustering. Our analysis reveals, for the first time, a synergistic benefit between medium-granularity segmentation and large-scale clustering (>10k units); moreover, multi-token combinations effectively capture fine-grained spoken semantics. On zero-shot Spoken Language Understanding (SLU), the optimal configuration reduces training data requirements by 50% and training time by 70%, while substantially outperforming state-of-the-art methods across multiple benchmarks. The core contribution lies in establishing fundamental trade-offs in speech token design and empirically validating that high-capacity discrete representations yield substantial gains for low-resource SLM training.
📝 Abstract
The purpose of speech tokenization is to transform a speech signal into a sequence of discrete representations, serving as the foundation for speech language models (SLMs). While many speech tokenization schemes exist, their effect on the performance of SLMs remains unclear. This paper investigates two key aspects of speech tokenization: the segmentation width and the cluster size of the discrete units. First, we segment speech signals into fixed- or variable-width spans and pool the representations within each span. We then train K-means models with multiple cluster sizes. Through evaluation on zero-shot spoken language understanding benchmarks, we find a positive effect of moderately coarse segmentation and larger cluster sizes. Notably, among the best-performing models, the most efficient one achieves a 50% reduction in training data and a 70% decrease in training runtime. Our analysis highlights the importance of combining multiple tokens to enhance fine-grained spoken language understanding.
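The fixed-width branch of the pipeline described above can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' released code): frame-level speech features are grouped into fixed-width spans, each span is mean-pooled, and the pooled vector is assigned to its nearest K-means centroid to produce a discrete unit. The function name `tokenize`, the toy features, and the hand-written codebook are all assumptions for demonstration; in practice the codebook would come from K-means trained on pooled representations.

```python
import numpy as np

def tokenize(features, width, codebook):
    """Segment frame-level features into fixed-width spans, mean-pool
    each span, and map each pooled vector to its nearest centroid.

    features: (T, D) array of frame-level speech representations
    width:    number of frames per segment (the segmentation width)
    codebook: (K, D) array of K-means centroids (K = cluster size)
    Returns a list of discrete unit IDs, one per segment.
    """
    T = features.shape[0]
    tokens = []
    for start in range(0, T, width):
        pooled = features[start:start + width].mean(axis=0)  # pool the span
        dists = np.linalg.norm(codebook - pooled, axis=1)    # L2 to centroids
        tokens.append(int(np.argmin(dists)))                 # nearest unit ID
    return tokens

# Toy example: 8 frames of 2-D features, width 4, a 3-entry codebook.
feats = np.array([[0.0, 0.0], [0.1, 0.1], [0.0, 0.1], [0.1, 0.0],
                  [1.0, 1.0], [0.9, 1.1], [1.1, 0.9], [1.0, 1.0]])
codebook = np.array([[0.05, 0.05], [1.0, 1.0], [5.0, 5.0]])
print(tokenize(feats, width=4, codebook=codebook))  # → [0, 1]
```

A wider `width` yields coarser segments (fewer, more abstract tokens), while a larger codebook `K` gives finer-grained unit distinctions, which are the two axes the study varies.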