🤖 AI Summary
To address the poor robustness of voice activity detection (VAD) on AIoT devices (e.g., smart glasses, earphones) under low signal-to-noise ratio (SNR) and complex noise conditions, this paper proposes a lightweight noise-robust VAD framework. The method enhances robustness without increasing model parameters or requiring fine-tuning, by synergistically integrating adaptive signal preprocessing (including noise suppression), a compact neural network backbone (<100K parameters), and a temporal post-processing module. Experimental results across diverse real-world noise scenarios (SNR = 0–10 dB) demonstrate that the proposed approach reduces average false-alarm rate by 32.7% and miss-detection rate by 28.4% compared to state-of-the-art lightweight VAD models, while also improving performance on clean speech. The solution satisfies strict real-time latency and edge-deployment constraints, making it highly practical for resource-constrained AIoT applications.
📝 Abstract
Voice Activity Detection (VAD) in the presence of background noise remains a challenging problem in speech processing. Accurate VAD is essential in automatic speech recognition, voice-to-text, conversational agents, etc, where noise can severely degrade the performance. A modern application includes the voice assistant, specially mounted on Artificial Intelligence of Things (AIoT) devices such as cell phones, smart glasses, earbuds, etc, where the voice signal includes background noise. Therefore, VAD modules must remain light-weight due to their practical on-device limitation. The existing models often struggle with low signal-to-noise ratios across diverse acoustic environments. A simple VAD often detects human voice in a clean environment, but struggles to detect the human voice in noisy conditions. We propose a noise-robust VAD that comprises a light-weight VAD, with data pre-processing and post-processing added modules to handle the background noise. This approach significantly enhances the VAD accuracy in noisy environments and requires neither a larger model, nor fine-tuning. Experimental results demonstrate that our approach achieves a notable improvement compared to baselines, particularly in environments with high background noise interference. This modified VAD additionally improving clean speech detection.