🤖 AI Summary
This work addresses the challenge of high-fidelity speech restoration from noisy utterances without requiring target-speaker priors at the denoising stage. We propose a two-stage cascaded framework: a speaker-agnostic generative speech restoration (GSR) model serves as the front end to suppress noise and produce coarse reconstructions, and a diffusion-based voice conversion (VC) model acts as the back end, performing speaker-consistent, studio-quality refinement guided by clean speaker embeddings. To our knowledge, this is the first work to leverage voice conversion for same-speaker speech restoration, effectively decoupling universal denoising from personalized reconstruction and improving noise robustness. The method achieves objective-metric scores (PESQ, STOI, ESTOI) comparable to state-of-the-art methods across multiple benchmark datasets, with gains in speech naturalness and acoustic fidelity over existing approaches.
📝 Abstract
We propose a speech enhancement system that combines speaker-agnostic speech restoration with voice conversion (VC) to obtain studio-quality speech. While voice conversion models are typically used to change speaker characteristics, they can also serve as a means of speech restoration when the target speaker is the same as the source speaker. However, because VC models are vulnerable to noisy conditions, we place a generative speech restoration (GSR) model at the front end of the proposed system. The GSR model performs noise suppression and repairs speech damage incurred during that process, without any knowledge of the target speaker. The VC stage then uses guidance from clean speaker embeddings to further restore the output speech. With this two-stage approach, we achieve speech-quality objective-metric scores comparable to state-of-the-art (SOTA) methods across multiple datasets.
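The two-stage cascade described above can be sketched as a simple interface: a speaker-agnostic GSR front end followed by a speaker-embedding-conditioned VC back end. The sketch below is illustrative only; the function names, the moving-average "denoiser", and the pass-through "VC" stub are placeholders for the paper's neural models, not the authors' code.

```python
import numpy as np

def gsr_restore(noisy: np.ndarray) -> np.ndarray:
    """Stage 1 (stand-in): speaker-agnostic noise suppression.

    A real GSR model would generatively restore speech damaged by
    suppression; here a moving-average smoother only illustrates the
    waveform-in / waveform-out interface.
    """
    kernel = np.ones(5) / 5.0
    return np.convolve(noisy, kernel, mode="same")

def vc_refine(coarse: np.ndarray, speaker_embedding: np.ndarray) -> np.ndarray:
    """Stage 2 (stand-in): diffusion-VC refinement guided by a clean
    speaker embedding of the *same* speaker.

    A real system would run reverse diffusion conditioned on the
    embedding; this stub returns its input to keep the sketch runnable.
    """
    assert speaker_embedding.ndim == 1  # one embedding vector per speaker
    return coarse

def restore(noisy: np.ndarray, speaker_embedding: np.ndarray) -> np.ndarray:
    """Full cascade: universal denoising, then personalized reconstruction."""
    coarse = gsr_restore(noisy)                   # speaker-agnostic front end
    return vc_refine(coarse, speaker_embedding)   # speaker-conditioned back end
```

The key design point the sketch captures is the decoupling: stage 1 needs no speaker information, so it stays robust to unseen noise, while stage 2 receives a clean speaker embedding and can therefore target same-speaker, studio-quality output.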