ParaGSE: Parallel Generative Speech Enhancement with Group-Vector-Quantization-based Neural Speech Codec

📅 2026-02-02
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work proposes a parallel generative speech enhancement framework to address the common limitations of existing methods, namely high computational complexity, low generation efficiency, and suboptimal speech quality. The core innovation lies in a neural speech codec based on grouped vector quantization (GVQ), which maps noisy speech into semantically meaningful yet mutually independent tokens. Clean tokens are then directly predicted via parallel branches for efficient waveform reconstruction. By integrating conditional spectral feature modeling with a parallel generation architecture, the proposed method consistently outperforms both discriminative and generative baselines across various distortion conditions. Moreover, it achieves approximately 1.5× faster inference speed on CPU compared to sequential generative approaches, effectively balancing high-quality output with computational efficiency.
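The encode/predict/decode pipeline described above can be sketched as independent per-group branches that run concurrently, since the GVQ tokens are mutually independent. Everything below (`enhance`, the toy branch functions, the trivial `decode`) is a hypothetical stand-in for the learned models, not the paper's code:

```python
from concurrent.futures import ThreadPoolExecutor

def enhance(noisy_tokens, branch_predictors, decode):
    # Each token group g is mapped to its clean counterpart by its own
    # branch predictor; independence lets all groups run in parallel.
    with ThreadPoolExecutor() as pool:
        clean_tokens = list(pool.map(
            lambda gt: branch_predictors[gt[0]](gt[1]),
            enumerate(noisy_tokens)))
    # The codec decoder reconstructs the waveform from the clean tokens;
    # here `decode` is just a placeholder callable.
    return decode(clean_tokens)

# toy stand-ins: each "branch" shifts its token; "decode" sums the tokens
branches = [lambda t, g=g: (t + g) % 16 for g in range(4)]
out = enhance([3, 5, 7, 9], branches, decode=sum)
```

The parallelism pays off because no branch waits on another's output, unlike autoregressive token prediction where tokens are generated serially.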

๐Ÿ“ Abstract
Recently, generative speech enhancement has garnered considerable interest; however, existing approaches are hindered by excessive complexity, limited efficiency, and suboptimal speech quality. To overcome these challenges, this paper proposes a novel parallel generative speech enhancement (ParaGSE) framework that leverages a group vector quantization (GVQ)-based neural speech codec. The GVQ-based codec adopts separate VQs to produce mutually independent tokens, enabling efficient parallel token prediction in ParaGSE. Specifically, ParaGSE leverages the GVQ-based codec to encode degraded speech into distinct tokens, predicts the corresponding clean tokens through parallel branches conditioned on degraded spectral features, and ultimately reconstructs clean speech via the codec decoder. Experimental results demonstrate that ParaGSE consistently produces superior enhanced speech compared to both discriminative and generative baselines, under a wide range of distortions including noise, reverberation, band-limiting, and their mixtures. Furthermore, empowered by parallel computation in token prediction, ParaGSE attains about a 1.5-fold improvement in generation efficiency on CPU compared with serial generative speech enhancement approaches.
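The GVQ design described in the abstract amounts to splitting the codec latent into groups and quantizing each group against its own codebook, so every group yields one independent token. A minimal sketch of that lookup, assuming nearest-neighbor quantization; the group count, codebook sizes, and the `grouped_vq` name are illustrative assumptions, not the paper's implementation:

```python
import random

def grouped_vq(z, codebooks):
    """Split latent vector z into len(codebooks) equal groups and quantize
    each group with its own codebook (nearest codeword, squared Euclidean)."""
    G = len(codebooks)
    size = len(z) // G
    tokens, quantized = [], []
    for g, cb in enumerate(codebooks):
        group = z[g * size:(g + 1) * size]
        # index of the nearest codeword in this group's codebook
        idx = min(range(len(cb)),
                  key=lambda k: sum((a - b) ** 2 for a, b in zip(cb[k], group)))
        tokens.append(idx)
        quantized.extend(cb[idx])
    return quantized, tokens

# toy setup: 8-dim latent, 4 groups, 16 codewords of dimension 2 per group
rng = random.Random(0)
codebooks = [[[rng.gauss(0, 1) for _ in range(2)] for _ in range(16)]
             for _ in range(4)]
z = [rng.gauss(0, 1) for _ in range(8)]
z_q, tokens = grouped_vq(z, codebooks)
```

Because each group is quantized independently, each token can also be predicted by its own branch, which is what enables the parallel generation the abstract describes.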
Problem

Research questions and friction points this paper is trying to address.

generative speech enhancement
computational complexity
efficiency
speech quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel Generative Speech Enhancement
Group Vector Quantization
Neural Speech Codec
Token-level Parallelism
Speech Enhancement
Fei Liu
National Engineering Research Center of Speech and Language Information Processing, University of Science and Technology of China, Hefei, China
Yang Ai
Associate Researcher, University of Science and Technology of China
Speech Synthesis · Speech Enhancement · Speech Coding · Deep Learning