Maestro-EVC: Controllable Emotional Voice Conversion Guided by References and Explicit Prosody

📅 2025-08-09
🤖 AI Summary
Existing emotional voice conversion (EVC) methods struggle to disentangle speaker identity from fine-grained emotional dynamics such as temporal prosodic variations, resulting in poor controllability and limited expressiveness. To address this, we propose an end-to-end disentangled framework: (1) independent reference signals separately control linguistic content, speaker identity, and emotional style; (2) a temporal emotion encoder explicitly models the evolution of emotional states over time; and (3) explicit prosody modeling is combined with prosody augmentation strategies to improve robustness across diverse prosodic scenarios. Experiments show that our method achieves significantly improved independent control over content, speaker, and emotion while preserving high naturalness, and substantially enhances emotional expressiveness, outperforming state-of-the-art approaches in both objective metrics and subjective listening tests.

📝 Abstract
Emotional voice conversion (EVC) aims to modify the emotional style of speech while preserving its linguistic content. In practical EVC, controllability (the ability to independently control speaker identity and emotional style using distinct references) is crucial. However, existing methods often struggle to fully disentangle these attributes and lack the ability to model fine-grained emotional expressions such as temporal dynamics. We propose Maestro-EVC, a controllable EVC framework that enables independent control of content, speaker identity, and emotion by effectively disentangling each attribute from separate references. We further introduce a temporal emotion representation and explicit prosody modeling with prosody augmentation to robustly capture and transfer the temporal dynamics of the target emotion, even under prosody-mismatched conditions. Experimental results confirm that Maestro-EVC achieves high-quality, controllable, and emotionally expressive speech synthesis.
Problem

Research questions and friction points this paper is trying to address.

Disentangle speaker identity and emotional style in voice conversion
Model fine-grained emotional expressions like temporal dynamics
Achieve controllable emotional voice conversion under prosody-mismatched conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentangles content, speaker, and emotion attributes
Uses temporal emotion representation for dynamics
Explicit prosody modeling with augmentation
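The three-way disentanglement listed above can be sketched at a high level: separate encoders map a content reference, a speaker reference, and an emotion reference to independent representations, which a decoder then fuses. The encoder and decoder below are hypothetical placeholders (simple pooling and additive fusion over toy feature vectors), not the paper's actual neural architecture; they only illustrate the shape of the disentangled pipeline, assuming frame-level content and emotion streams and an utterance-level speaker embedding.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch: each "reference" is a list of frame-level
# feature vectors standing in for audio. Real encoders would be
# learned networks; here we use trivial placeholders.

def mean_pool(frames: List[List[float]]) -> List[float]:
    """Average frame-level features into one utterance-level vector."""
    dim = len(frames[0])
    return [sum(f[d] for f in frames) / len(frames) for d in range(dim)]

@dataclass
class DisentangledEmbeddings:
    content: List[List[float]]   # frame-level, preserves timing
    speaker: List[float]         # utterance-level, time-invariant
    emotion: List[List[float]]   # frame-level emotional trajectory

def encode(content_ref, speaker_ref, emotion_ref) -> DisentangledEmbeddings:
    # Content keeps its temporal resolution; speaker is pooled to a
    # single vector; emotion keeps a per-frame trajectory, mirroring
    # the idea of a temporal emotion representation.
    return DisentangledEmbeddings(
        content=content_ref,
        speaker=mean_pool(speaker_ref),
        emotion=emotion_ref,
    )

def decode(emb: DisentangledEmbeddings) -> List[List[float]]:
    """Toy decoder: fuse the three streams frame by frame (additive)."""
    out = []
    for t, frame in enumerate(emb.content):
        emo = emb.emotion[min(t, len(emb.emotion) - 1)]
        out.append([c + s + e for c, s, e in zip(frame, emb.speaker, emo)])
    return out
```

Because each attribute flows through its own encoder, swapping only the emotion reference changes the emotional trajectory while the content and speaker streams are untouched, which is the controllability property the paper targets.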
Jinsung Yoon
Graduate School of Artificial Intelligence, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
Wooyeol Jeong
Pohang University of Science and Technology
speech synthesis, multi-modal learning, human-level interaction
Jio Gim
Dept. of Computer Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea
Young-Joo Suh
POSTECH
IoT, AI, Next Generation Networks