AI Summary
This work proposes a parallel generative speech enhancement framework to address common limitations of existing methods, namely high computational complexity, low generation efficiency, and suboptimal speech quality. The core innovation is a neural speech codec based on group vector quantization (GVQ), which maps noisy speech into semantically meaningful yet mutually independent tokens. Clean tokens are then predicted directly via parallel branches for efficient waveform reconstruction. By combining conditional spectral feature modeling with a parallel generation architecture, the proposed method consistently outperforms both discriminative and generative baselines across various distortion conditions. Moreover, it achieves approximately 1.5× faster inference on CPU than sequential generative approaches, effectively balancing high-quality output with computational efficiency.
Abstract
Recently, generative speech enhancement has garnered considerable interest; however, existing approaches are hindered by excessive complexity, limited efficiency, and suboptimal speech quality. To overcome these challenges, this paper proposes a novel parallel generative speech enhancement (ParaGSE) framework that leverages a group vector quantization (GVQ)-based neural speech codec. The GVQ-based codec adopts separate VQs to produce mutually independent tokens, enabling efficient parallel token prediction in ParaGSE. Specifically, ParaGSE leverages the GVQ-based codec to encode degraded speech into distinct tokens, predicts the corresponding clean tokens through parallel branches conditioned on degraded spectral features, and ultimately reconstructs clean speech via the codec decoder. Experimental results demonstrate that ParaGSE consistently produces superior enhanced speech compared with both discriminative and generative baselines under a wide range of distortions, including noise, reverberation, band-limiting, and their mixtures. Furthermore, empowered by parallel computation in token prediction, ParaGSE attains an approximately 1.5-fold improvement in generation efficiency on CPU compared with serial generative speech enhancement approaches.
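The group vector quantization idea at the heart of the codec can be sketched as follows. This is a minimal illustrative example under assumptions, not the paper's implementation: the class name, dimensions, and random (untrained) codebooks are placeholders. The key property it shows is that each group of the latent vector is quantized independently against its own codebook, yielding mutually independent tokens that a downstream model could predict in parallel.

```python
import numpy as np

rng = np.random.default_rng(0)

class GroupedVectorQuantizer:
    """Illustrative grouped VQ: split a latent vector into G independent
    groups, each quantized by its own codebook via nearest-neighbor search.
    Codebooks are random here; in a real codec they would be learned."""

    def __init__(self, dim, num_groups, codebook_size):
        assert dim % num_groups == 0, "latent dim must divide evenly into groups"
        self.group_dim = dim // num_groups
        # One independent codebook per group.
        self.codebooks = [rng.normal(size=(codebook_size, self.group_dim))
                          for _ in range(num_groups)]

    def encode(self, z):
        """Map a latent vector to one token index per group."""
        groups = np.split(z, len(self.codebooks))
        return [int(np.argmin(np.linalg.norm(cb - g, axis=1)))
                for g, cb in zip(groups, self.codebooks)]

    def decode(self, tokens):
        """Reconstruct the quantized latent from per-group token indices."""
        return np.concatenate([cb[t] for t, cb in zip(tokens, self.codebooks)])

gvq = GroupedVectorQuantizer(dim=8, num_groups=4, codebook_size=16)
z = rng.normal(size=8)
tokens = gvq.encode(z)      # one independent token per group
z_hat = gvq.decode(tokens)  # quantized approximation of z
```

Because the groups do not share a codebook, the tokens for a frame carry no cross-group ordering constraint, which is what makes branch-parallel clean-token prediction possible in a framework like ParaGSE.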