🤖 AI Summary
This work proposes DiscoPhon, a unified multilingual benchmark for unsupervised phoneme discovery, covering 12 languages (6 development and 6 test) and requiring only 10 hours of unannotated speech to evaluate how well systems map discovered discrete units to ground-truth phoneme inventories. The approach leverages pretrained multilingual HuBERT and SpidR models to extract discrete units and aligns them with phonemes using one-to-one or many-to-one mapping strategies. Comprehensive evaluation along three dimensions (unit quality, identification accuracy, and segmentation fidelity) demonstrates that current models can effectively capture phonemic information, though performance varies significantly across languages. These findings underscore the value of DiscoPhon as a benchmark for assessing cross-lingual phonological modeling capabilities in unsupervised settings.
📝 Abstract
We introduce DiscoPhon, a multilingual benchmark for evaluating unsupervised phoneme discovery from discrete speech units. DiscoPhon covers 6 dev and 6 test languages, chosen to span a wide range of phonemic contrasts. Given only 10 hours of speech in a previously unseen language, systems must produce discrete units that are mapped to a predefined phoneme inventory, through either a many-to-one or a one-to-one assignment. The resulting sequences are evaluated for unit quality, recognition, and segmentation. We provide four baselines built on pretrained multilingual HuBERT and SpidR models, and show that enough phonemic information is available in current models for the derived units to correlate well with phonemes, though with variations across languages.
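To make the two assignment regimes concrete, here is a minimal sketch (not the benchmark's actual implementation) of mapping discrete unit IDs to phoneme labels from frame-level co-occurrence counts: many-to-one assigns each unit its most frequent co-occurring phoneme, while one-to-one is approximated here with a greedy assignment (the paper may instead use an optimal assignment such as the Hungarian algorithm); the function name and exact procedure are illustrative assumptions.

```python
from collections import Counter

def map_units_to_phonemes(units, phonemes, one_to_one=False):
    """Map discrete unit IDs to phoneme labels via co-occurrence counts.

    units, phonemes: equal-length frame-level sequences (hypothetical input
    format; the benchmark's real alignment protocol may differ).
    """
    # Count how often each (unit, phoneme) pair co-occurs frame by frame.
    counts = Counter(zip(units, phonemes))

    if not one_to_one:
        # Many-to-one: each unit takes its most frequent phoneme;
        # several units may share the same phoneme label.
        mapping = {}
        for (u, p), c in counts.items():
            if u not in mapping or c > counts[(u, mapping[u])]:
                mapping[u] = p
        return mapping

    # One-to-one (greedy sketch): walk (unit, phoneme) pairs from most to
    # least frequent, assigning each unit and phoneme at most once.
    mapping, used_phonemes = {}, set()
    for (u, p), _ in counts.most_common():
        if u not in mapping and p not in used_phonemes:
            mapping[u] = p
            used_phonemes.add(p)
    return mapping
```

Under one-to-one, units left unassigned (when there are more units than phonemes) would simply count as errors at evaluation time; many-to-one is more forgiving since redundant units can map to the same phoneme.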