🤖 AI Summary
This work investigates the bottlenecks limiting word segmentation and clustering performance in unsupervised spoken vocabulary learning. Contrary to the common assumption that clustering algorithms are the primary constraint, it identifies inconsistent word-level representations as the main factor degrading lexicon quality. Method: under idealised gold word-boundary conditions, diverse self-supervised speech features (continuous vs. discrete, frame-level vs. word-level) are evaluated alongside clustering methods (K-means, hierarchical clustering, graph clustering) on English and Mandarin. Contribution/Results: a DTW-based graph clustering method operating directly on continuous features performs best, while lighter-weight alternatives (cosine distance over averaged continuous features, or edit distance over discrete unit sequences) trade some accuracy for efficiency. Controlled experiments that isolate either the representations or the clustering method show that representation consistency across segments of the same word type, not the clustering strategy, is the key bottleneck.
📝 Abstract
Zero-resource word segmentation and clustering systems aim to tokenise speech into word-like units without access to text labels. Despite progress, the induced lexicons are still far from perfect. In an idealised setting with gold word boundaries, we ask whether performance is limited by the representation of word segments, or by the clustering methods that group them into word-like types. We combine a range of self-supervised speech features (continuous/discrete, frame/word-level) with different clustering methods (K-means, hierarchical, graph-based) on English and Mandarin data. The best system uses graph clustering with dynamic time warping on continuous features. Faster alternatives use graph clustering with cosine distance on averaged continuous features or edit distance on discrete unit sequences. Through controlled experiments that isolate either the representations or the clustering method, we demonstrate that representation variability across segments of the same word type -- rather than clustering -- is the primary factor limiting performance.
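To make the faster alternative concrete, here is a minimal sketch of graph clustering over word segments: frame-level features are mean-pooled into one vector per segment, segments closer than a cosine-distance threshold are linked, and clusters are read off as connected components. The feature values, threshold, and component-based clustering are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: cluster word segments by cosine distance over
# mean-pooled (averaged) frame-level features. The threshold value
# and the connected-components clustering rule are illustrative.
import math
from collections import defaultdict

def mean_pool(frames):
    """Average a list of frame-level vectors into one word-level vector."""
    dim = len(frames[0])
    return [sum(f[d] for f in frames) / len(frames) for d in range(dim)]

def cosine_distance(a, b):
    """1 - cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def graph_cluster(segments, threshold=0.2):
    """Link segments whose distance is below `threshold`; each
    connected component of the resulting graph is one word-like type."""
    vecs = [mean_pool(s) for s in segments]
    n = len(vecs)
    adj = defaultdict(list)
    for i in range(n):
        for j in range(i + 1, n):
            if cosine_distance(vecs[i], vecs[j]) < threshold:
                adj[i].append(j)
                adj[j].append(i)
    # Label components with a depth-first search.
    labels = [-1] * n
    cluster = 0
    for start in range(n):
        if labels[start] != -1:
            continue
        stack = [start]
        while stack:
            node = stack.pop()
            if labels[node] == -1:
                labels[node] = cluster
                stack.extend(adj[node])
        cluster += 1
    return labels
```

The DTW-based variant would replace `cosine_distance` on pooled vectors with a dynamic-time-warping alignment cost over the raw frame sequences, and the discrete variant with edit distance over unit ID sequences; the surrounding graph-clustering logic stays the same.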