Reducing the Gap Between Pretrained Speech Enhancement and Recognition Models Using a Real Speech-Trained Bridging Module

📅 2025-01-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the degradation of automatic speech recognition (ASR) performance caused by single-channel speech enhancement under real-world noise, this paper proposes an ASR-driven, end-to-end bridging module. Departing from conventional observation addition (OA) training, which relies on simulated noisy-clean speech pairs, the approach adopts DNSMOS, a no-reference perceptual metric that requires no clean label, as the supervision signal, so the bridging module can be trained directly on real noisy speech. A multi-task learning framework additionally incorporates, for each utterance, a vector of ASR word error rates (WERs) measured at multiple candidate OA coefficients, and uses it jointly with coefficient prediction to determine the optimal OA coefficient. The resulting method enables adaptive enhancement under realistic acoustic conditions. On the CHiME-4 real evaluation sets, it significantly outperforms the simulation-trained baseline, reducing WER and improving robustness and generalization. Key contributions: (i) a DNSMOS-guided, clean-label-free training paradigm for the bridging module; and (ii) a WER-vector multi-task optimization framework for determining OA coefficients.
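The OA step the summary refers to is a convex combination of the noisy and enhanced waveforms, weighted by the coefficient the bridging module predicts. A minimal sketch (the function name and toy values are illustrative, not from the paper):

```python
import numpy as np

def observation_addition(noisy, enhanced, alpha):
    """Blend enhanced and noisy speech with OA coefficient alpha in [0, 1].

    alpha = 1 keeps only the enhanced signal (trusting the SE front-end);
    alpha = 0 passes the raw noisy input through unchanged.
    """
    alpha = float(np.clip(alpha, 0.0, 1.0))
    return alpha * enhanced + (1.0 - alpha) * noisy

# Toy 4-sample waveforms; a real system would use full utterances.
noisy = np.array([0.2, -0.4, 0.1, 0.3])
enhanced = np.array([0.1, -0.2, 0.0, 0.2])
blended = observation_addition(noisy, enhanced, 0.5)
```

In the paper's setting, `alpha` is produced per utterance by the bridging module rather than fixed by hand; the blend is what the ASR back-end actually consumes.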

📝 Abstract
The information loss or distortion caused by single-channel speech enhancement (SE) harms the performance of automatic speech recognition (ASR). Observation addition (OA) is an effective post-processing method that improves ASR performance by balancing noisy and enhanced speech, so determining the OA coefficient is crucial. However, the existing supervised OA-coefficient module, called the bridging module, is trained only on simulated noisy speech, which has a severe mismatch with real noisy speech. In this paper, we propose training strategies that train the bridging module with real noisy speech. First, DNSMOS is selected to evaluate the perceptual quality of real noisy speech, with no need for a corresponding clean label, to train the bridging module; additional constraints during training further enhance its robustness. Second, each utterance is decoded by the ASR back-end under various OA coefficients to obtain word error rates (WERs), which are assembled into a multidimensional vector. This vector is introduced into the bridging module via multi-task learning and used to determine the optimal OA coefficients. Experimental results on the CHiME-4 dataset show that the proposed methods all achieve significant improvements over the bridging module trained on simulated data, especially on the real evaluation sets.
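The WER vector described in the abstract is built by decoding each utterance at several candidate OA coefficients and recording the WER at each. One plausible way to turn that vector into a multi-task training target, sketched here under the assumption of a softmax over negative WERs (the paper's exact target construction may differ):

```python
import numpy as np

def wer_vector_to_target(wers, temperature=0.1):
    """Map a per-coefficient WER vector to a soft target distribution.

    Lower WER -> higher target mass (softmax over negative WERs).
    `temperature` controls how sharply the best coefficient dominates.
    This is an illustrative construction, not the paper's exact recipe.
    """
    wers = np.asarray(wers, dtype=float)
    logits = -wers / temperature
    logits -= logits.max()  # numerical stability before exponentiation
    probs = np.exp(logits)
    return probs / probs.sum()

# Candidate OA coefficients and the WER measured at each (toy numbers).
coeffs = [0.0, 0.25, 0.5, 0.75, 1.0]
wers = [0.32, 0.21, 0.15, 0.18, 0.27]
target = wer_vector_to_target(wers)
best = coeffs[int(np.argmax(target))]  # coefficient with the lowest WER
```

The bridging module can then be trained with a cross-entropy term against `target` alongside its DNSMOS-guided objective, which is the multi-task setup the abstract describes.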
Problem

Research questions and friction points this paper is trying to address.

Single-channel Speech Enhancement
Real Environment Noise
Automatic Speech Recognition Accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Single-Channel Speech Enhancement
DNSMOS Evaluation
Multi-Task Learning