Unsupervised lexicon learning from speech is limited by representations rather than clustering

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the bottlenecks limiting word segmentation and clustering performance in unsupervised spoken vocabulary learning. Contrary to the common assumption that clustering algorithms are the primary constraint, it identifies inconsistent word-level representations as the critical factor degrading lexicon quality. Method: under idealised gold word-boundary conditions, diverse self-supervised speech features (continuous vs. discrete, frame-level vs. word-level) are systematically evaluated alongside clustering methods (K-means, hierarchical clustering, graph clustering) on English and Mandarin. Contribution/Results: a DTW-based graph clustering method operating directly on continuous features achieves the best performance among the systems compared, while lighter-weight alternatives, cosine distance over averaged continuous features or edit distance over discrete unit sequences, trade some accuracy for efficiency. Through controlled experiments that isolate either the representations or the clustering method, the findings show that representation consistency across segments of the same word type, not clustering strategy, is the key bottleneck in unsupervised lexicon learning.

📝 Abstract
Zero-resource word segmentation and clustering systems aim to tokenise speech into word-like units without access to text labels. Despite progress, the induced lexicons are still far from perfect. In an idealised setting with gold word boundaries, we ask whether performance is limited by the representation of word segments, or by the clustering methods that group them into word-like types. We combine a range of self-supervised speech features (continuous/discrete, frame/word-level) with different clustering methods (K-means, hierarchical, graph-based) on English and Mandarin data. The best system uses graph clustering with dynamic time warping on continuous features. Faster alternatives use graph clustering with cosine distance on averaged continuous features or edit distance on discrete unit sequences. Through controlled experiments that isolate either the representations or the clustering method, we demonstrate that representation variability across segments of the same word type -- rather than clustering -- is the primary factor limiting performance.
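The abstract's best-performing configuration combines graph clustering with dynamic time warping (DTW) over continuous segment features. As a rough sketch of the pairwise alignment cost involved (an illustrative implementation, not the authors' code; the use of Euclidean frame distances is an assumption):

```python
import numpy as np

def dtw_distance(x, y):
    """DTW alignment cost between two variable-length feature
    sequences of shape (frames, dims), using Euclidean frame
    distances (illustrative choice)."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])
            # Extend the cheapest of match, insertion, or deletion
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because DTW absorbs differences in segment duration, two renditions of the same word with different speaking rates can still receive a near-zero distance, which is what makes it a natural edge weight for a similarity graph over continuous features.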
Problem

Research questions and friction points this paper is trying to address.

Asks whether unsupervised spoken lexicon quality is limited by segment representations or by the clustering methods that group them
Evaluates this question under idealised gold word-boundary conditions on English and Mandarin
Identifies representation variability across segments of the same word type as the primary factor limiting clustering accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph clustering with dynamic time warping on continuous features
Graph clustering with cosine distance on averaged continuous features
Graph clustering with edit distance on discrete unit sequences
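To make the cosine-distance variant above concrete, here is a minimal sketch (not the authors' implementation): frame-level features are averaged into one vector per segment, segments whose cosine similarity exceeds a threshold are linked, and connected components of the resulting graph become word types. The threshold value and the connected-components clustering criterion are illustrative assumptions.

```python
import numpy as np

def cluster_segments(segment_feats, threshold=0.9):
    """Toy graph clustering over averaged continuous features:
    link segments with cosine similarity >= `threshold` and
    return connected-component labels (one label per segment)."""
    # Average frame-level features into one vector per segment
    vecs = np.stack([f.mean(axis=0) for f in segment_feats])
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    sim = vecs @ vecs.T          # pairwise cosine similarities
    adj = sim >= threshold       # similarity-graph adjacency

    # Connected components via depth-first traversal
    n = len(segment_feats)
    labels = [-1] * n
    next_label = 0
    for start in range(n):
        if labels[start] != -1:
            continue
        stack = [start]
        while stack:
            i = stack.pop()
            if labels[i] != -1:
                continue
            labels[i] = next_label
            stack.extend(j for j in range(n) if adj[i, j] and labels[j] == -1)
        next_label += 1
    return labels
```

Averaging collapses each segment to a fixed-size vector, so the pairwise comparison is a single dot product rather than a full DTW alignment, which is why this variant is the faster alternative.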
Danel Adendorff
Electrical and Electronic Engineering, Stellenbosch University, South Africa
Simon Malan
Electrical and Electronic Engineering, Stellenbosch University, South Africa
Herman Kamper
Stellenbosch University
Speech Recognition · Machine Learning