🤖 AI Summary
This work addresses the high computational cost of score-matching and Schrödinger-bridge generative methods in speech enhancement. We conduct the first systematic investigation of flow matching (FM) for this task, proposing three FM training objectives: velocity field modeling, direct prediction of the enhanced speech $x_1$, and preconditioned FM. To jointly optimize perceptual quality and signal fidelity, we design a composite loss function driven by PESQ and SI-SDR. Experiments on the DNS-Challenge dataset show that our method achieves +1.2 PESQ and +3.8 dB SI-SDR improvements over strong baselines, with ~40% faster training convergence than existing generative enhancers. Our core contributions are: (i) the first comprehensive evaluation of FM variants in speech enhancement; and (ii) a perception–signal co-optimization paradigm that unifies high-fidelity reconstruction with training efficiency.
📝 Abstract
Speech enhancement (SE) aims to recover clean speech from noisy recordings. Although generative approaches such as score matching and the Schrödinger bridge have shown strong effectiveness, they are often computationally expensive. Flow matching offers a more efficient alternative by directly learning a velocity field that maps noise to data. In this work, we present a systematic study of flow matching for SE under three training objectives: velocity prediction, $x_1$ prediction, and preconditioned $x_1$ prediction. We analyze their impact on training dynamics and overall performance. Moreover, by introducing perceptual (PESQ) and signal-based (SI-SDR) objectives, we further improve convergence efficiency and speech quality, yielding substantial gains across evaluation metrics.
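With the usual linear (rectified-flow) path $x_t = (1-t)\,x_0 + t\,x_1$, the velocity target is the constant $x_1 - x_0$, and an $x_1$-prediction model implies a velocity estimate via $(\hat{x}_1 - x_t)/(1-t)$. A minimal sketch of these two objectives, assuming this linear path (the paper's exact parameterization and the preconditioned variant are not spelled out here):

```python
import numpy as np

def fm_targets(x0, x1, t):
    """Point on the linear probability path and its velocity target.

    x0: source (noise) sample, x1: clean-speech sample, t in [0, 1].
    """
    x_t = (1.0 - t) * x0 + t * x1   # interpolant along the path
    v_target = x1 - x0              # constant velocity of the linear path
    return x_t, v_target

def velocity_loss(v_pred, v_target):
    # Objective 1: regress the velocity field directly (MSE).
    return np.mean((v_pred - v_target) ** 2)

def x1_loss(x1_pred, x1):
    # Objective 2: predict the clean signal x1 itself (MSE).
    return np.mean((x1_pred - x1) ** 2)

def velocity_from_x1(x1_pred, x_t, t, eps=1e-5):
    # On the linear path, an x1 estimate implies a velocity estimate,
    # so an x1-prediction model can still drive the ODE sampler.
    return (x1_pred - x_t) / max(1.0 - t, eps)
```

At inference, either parameterization is integrated from $t=0$ (noise) to $t=1$ (enhanced speech) with a few ODE steps, which is the source of the efficiency advantage over score-based samplers.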