Improving Resource-Efficient Speech Enhancement via Neural Differentiable DSP Vocoder Refinement

📅 2025-08-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of deploying high-quality speech enhancement (SE) models on resource-constrained wearable devices (e.g., smart glasses), this paper proposes a resource-efficient end-to-end SE framework. A lightweight neural network jointly predicts spectral envelopes, fundamental frequency (F0), and periodicity from noisy speech; these features drive a Differentiable Digital Signal Processing (DDSP) vocoder, so feature prediction and waveform synthesis can be optimized jointly, end to end. To further improve synthesis fidelity, the training objective combines short-time Fourier transform (STFT)-domain waveform reconstruction losses with an adversarial loss, yielding substantial quality gains at negligible computational overhead. Experiments show significant improvements over strong baselines: +4% in Short-Time Objective Intelligibility (STOI) and +19% in Deep Noise Suppression Mean Opinion Score (DNSMOS), while maintaining real-time inference. The framework thus strikes an effective balance between perceptual fidelity and computational efficiency, making it well suited for embedded voice interaction applications.

📝 Abstract
Deploying speech enhancement (SE) systems in wearable devices, such as smart glasses, is challenging due to the limited computational resources on the device. Although deep learning methods have achieved high-quality results, their computational cost limits their feasibility on embedded platforms. This work presents an efficient end-to-end SE framework that leverages a Differentiable Digital Signal Processing (DDSP) vocoder for high-quality speech synthesis. First, a compact neural network predicts enhanced acoustic features from noisy speech: spectral envelope, fundamental frequency (F0), and periodicity. These features are fed into the DDSP vocoder to synthesize the enhanced waveform. The system is trained end-to-end with STFT and adversarial losses, enabling direct optimization at the feature and waveform levels. Experimental results show that our method improves intelligibility and quality by 4% (STOI) and 19% (DNSMOS) over strong baselines without significantly increasing computation, making it well-suited for real-time applications.
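The synthesis stage described in the abstract can be sketched as a harmonic-plus-noise model: the predicted F0 drives a bank of harmonic oscillators, per-harmonic amplitudes stand in for the spectral envelope, and periodicity gates the mix of harmonic and noise excitation. The function below is an illustrative NumPy sketch under those assumptions, not the paper's implementation; the function name, hop size, and noise scaling are all hypothetical.

```python
import numpy as np

def ddsp_vocoder(f0, periodicity, spectral_env, sr=16000, hop=200):
    """Harmonic-plus-noise synthesis sketch.

    f0:            per-frame fundamental frequency in Hz, shape (frames,)
    periodicity:   per-frame voicing degree in [0, 1], shape (frames,)
    spectral_env:  per-harmonic amplitudes, shape (frames, n_harmonics)
    """
    frames, n_harm = spectral_env.shape
    n_samples = frames * hop
    # Upsample frame-rate controls to sample rate (linear interpolation).
    t_frame = np.arange(frames) * hop
    t_samp = np.arange(n_samples)
    f0_s = np.interp(t_samp, t_frame, f0)
    per_s = np.interp(t_samp, t_frame, periodicity)
    # Instantaneous phase from cumulative instantaneous frequency.
    phase = 2 * np.pi * np.cumsum(f0_s) / sr
    harmonic = np.zeros(n_samples)
    for k in range(1, n_harm + 1):
        amp_k = np.interp(t_samp, t_frame, spectral_env[:, k - 1])
        # Silence harmonics that would alias above Nyquist.
        amp_k = np.where(k * f0_s < sr / 2, amp_k, 0.0)
        harmonic += amp_k * np.sin(k * phase)
    noise = np.random.default_rng(0).standard_normal(n_samples) * 0.01
    # Periodicity gates the harmonic vs. noise excitation mix.
    return per_s * harmonic + (1.0 - per_s) * noise
```

Because every operation here is differentiable in the control inputs, gradients from waveform-domain losses can flow back into the feature-prediction network, which is the property the end-to-end training relies on.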
Problem

Research questions and friction points this paper is trying to address.

Enhancing speech intelligibility and quality on resource-constrained wearable devices
Reducing computational cost of deep learning speech enhancement systems
Enabling real-time speech processing through efficient neural-DSP integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable DSP vocoder for efficient synthesis
Compact neural network predicts acoustic features
End-to-end training with STFT and adversarial losses
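A common realization of the STFT-domain loss listed above is a multi-resolution combination of spectral-convergence and log-magnitude terms computed at several window/hop settings. The NumPy sketch below illustrates that idea; the paper's exact resolutions and loss weighting are not given here, so the values shown are assumptions.

```python
import numpy as np

def stft_mag(x, n_fft, hop):
    # Magnitude STFT via Hann-windowed framing and a real FFT.
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * win
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=-1))

def multires_stft_loss(pred, target,
                       resolutions=((512, 128), (1024, 256), (2048, 512))):
    """Average of spectral-convergence + log-magnitude losses
    over several (n_fft, hop) resolutions (values are illustrative)."""
    loss = 0.0
    for n_fft, hop in resolutions:
        p = stft_mag(pred, n_fft, hop)
        t = stft_mag(target, n_fft, hop)
        sc = np.linalg.norm(t - p) / (np.linalg.norm(t) + 1e-8)
        mag = np.mean(np.abs(np.log(t + 1e-8) - np.log(p + 1e-8)))
        loss += sc + mag
    return loss / len(resolutions)
```

Comparing magnitudes at multiple resolutions trades off time and frequency localization, which discourages the vocoder from overfitting to any single analysis window; the adversarial loss would then be added on top of this reconstruction term.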
Authors

Heitor R. Guimarães (INRS - EMT)
Ke Tan (Research Scientist, Meta Reality Labs)
Juan Azcarreta (Meta Reality Labs)
Jesus Alvarez (Meta Reality Labs)
Prabhav Agrawal (Meta AI)
Ashutosh Pandey (Meta Reality Labs)
Buye Xu (Meta Reality Labs Research)