🤖 AI Summary
Existing emotional voice conversion (EVC) methods rely on text transcriptions or phoneme-level alignments, limiting their ability to handle variable-length speech and constraining naturalness and expressiveness. To address this, we propose the first duration-flexible parallel EVC framework. Our approach introduces a style autoencoder to disentangle linguistic content from emotional attributes; integrates discrete speech units with self-supervised features to eliminate dependence on text alignment; and employs a hierarchical stylized encoder coupled with a diffusion-based spectrogram generator for end-to-end emotional modeling and arbitrary-duration control. Experiments demonstrate that our method achieves a mean opinion score (MOS) of 4.12, outperforming state-of-the-art baselines across all key metrics—character error rate (CER), F0 root-mean-square error (F0-RMSE), and automatically predicted MOS (UTMOS)—with significant improvements in emotional fidelity, speech naturalness, and duration controllability.
📝 Abstract
Emotional voice conversion (EVC) modifies the pitch, spectral envelope, and other acoustic characteristics of speech to match a desired emotional state while preserving the speaker's identity. Recent advances in EVC model pitch and duration jointly by exploiting sequence-to-sequence models. In this study, we focus on parallel speech generation to increase the reliability and efficiency of conversion. We introduce a duration-flexible EVC framework (DurFlex-EVC) that integrates a style autoencoder and a unit aligner. Whereas previous variable-duration parallel generation models required text-to-speech alignment, we instead build our parallel generation on self-supervised representations and discrete speech units. The style autoencoder promotes content-style disentanglement by separating the source style from the input features and applying the target style in its place. The unit aligner encodes unit-level features by modeling emotional context. Furthermore, we enhance the style of the features with a hierarchical stylized encoder and generate high-quality Mel-spectrograms with a diffusion-based generator. The effectiveness of the approach is validated through subjective and objective evaluations, which demonstrate its superiority over baseline models.
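The abstract's pipeline — disentangle style from content, discretize to units, re-stylize, then generate a Mel-spectrogram of arbitrary length — can be sketched end to end. The sketch below is a toy NumPy illustration of that data flow only, under loud assumptions: all function names, shapes, and operations (mean-pooled "style", run-length unit collapsing, a stub iterative refiner standing in for the diffusion generator) are invented for illustration and are not the paper's actual DurFlex-EVC modules.

```python
import numpy as np

rng = np.random.default_rng(0)

def style_autoencoder(feats):
    """Toy disentanglement: split frame features into a content stream and
    an utterance-level style vector (illustrative, not the paper's design)."""
    style = feats.mean(axis=0)          # crude "style" summary of the utterance
    content = feats - style             # residual treated as "content"
    return content, style

def unit_aligner(content, n_units=50):
    """Toy discretization: hash frames to unit IDs and collapse repeats,
    standing in for discrete units from a self-supervised speech model."""
    ids = (np.abs(content).sum(axis=1) * 997).astype(int) % n_units
    keep = np.concatenate(([True], ids[1:] != ids[:-1]))  # drop consecutive repeats
    return content[keep], ids[keep]

def hierarchical_stylize(units, target_style):
    """Re-apply a target-emotion style to the unit-level features."""
    return units + target_style

def diffusion_generator(stylized, target_len, n_mels=80, steps=4):
    """Stub for a diffusion-based Mel generator: upsample the unit sequence
    to an arbitrary target length, then iteratively refine from noise."""
    idx = np.linspace(0, len(stylized) - 1, target_len).astype(int)
    cond = stylized[idx]                # duration control via upsampling
    proj = rng.standard_normal((cond.shape[1], n_mels)) / np.sqrt(cond.shape[1])
    mel = rng.standard_normal((target_len, n_mels))
    for _ in range(steps):              # crude stand-in for denoising steps
        mel = 0.5 * mel + 0.5 * (cond @ proj)
    return mel

# Convert 120 frames of (hypothetical) SSL features to a 200-frame Mel output.
src = rng.standard_normal((120, 256))
content, src_style = style_autoencoder(src)
units, ids = unit_aligner(content)
tgt_style = rng.standard_normal(256)    # target-emotion style vector
mel = diffusion_generator(hierarchical_stylize(units, tgt_style), target_len=200)
print(mel.shape)  # (200, 80): output duration is independent of the input's
```

The key property the sketch exercises is the one the abstract claims: because generation is conditioned on a unit sequence that can be upsampled to any `target_len`, output duration is decoupled from input duration and no text alignment is needed.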