🤖 AI Summary
This study investigates whether individual dimensions in the representations of self-supervised speech models (specifically WavLM) encode distinct speaker-related acoustic attributes such as pitch, gender, intensity, noise level, and the second formant. By applying principal component analysis (PCA) to utterance-level features, the authors identify independent dimensions that correlate strongly with these acoustic properties, establishing a clear correspondence between specific latent dimensions and interpretable speaker characteristics. Further experiments show that manipulating these dominant dimensions provides effective control over the associated speaker attributes in speech synthesis, confirming both their controllability and their practical utility in downstream applications.
📝 Abstract
How do speech models trained through self-supervised learning (SSL) structure their representations? Previous studies have examined how information is encoded in feature vectors across different layers, but few have considered whether speech characteristics are captured within individual dimensions of SSL features. In this paper, we focus on speaker information, applying PCA to utterance-averaged representations. Using WavLM, we find that the principal dimension explaining the most variance encodes pitch and associated characteristics such as gender. Other individual principal dimensions correlate with intensity, noise level, the second formant, and higher-frequency characteristics. Finally, in synthesis experiments we show that most of these characteristics can be controlled by changing the corresponding dimensions, providing a simple method to control properties of the output voice in synthesis applications.
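The analysis pipeline the abstract describes, averaging frame-level SSL features over each utterance, fitting PCA, and then shifting a principal dimension before resynthesis, can be sketched as follows. This is a minimal illustration, not the paper's code: the random array stands in for real WavLM layer outputs, and the shapes (200 utterances, 150 frames, 768-dimensional features) are assumed for the example.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for WavLM layer outputs with shape (utterances, frames, dim).
# In the paper's setting these would come from a pretrained WavLM model;
# random features are used here purely to illustrate the pipeline.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 150, 768))

# Average over time to obtain one vector per utterance.
utt_vectors = features.mean(axis=1)  # shape (200, 768)

# Fit PCA; each principal component is a candidate "speaker dimension"
# whose scores can be correlated with pitch, intensity, etc.
pca = PCA(n_components=10)
scores = pca.fit_transform(utt_vectors)  # shape (200, 10)

# To manipulate an attribute, shift the score along component k and map
# back to feature space before passing the features to a synthesizer.
k, delta = 0, 2.0
shifted = scores.copy()
shifted[:, k] += delta
modified = pca.inverse_transform(shifted)  # shape (200, 768)
```

Correlating each column of `scores` with measured acoustic properties (e.g. mean F0 per utterance) is then what identifies which dimension encodes which attribute.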