🤖 AI Summary
This work addresses the challenge of effectively separating impulsive acoustic events (e.g., knocks, alarms) from stationary background noise (e.g., HVAC hum, traffic rumble) in real-world soundscapes. We propose IS³, the first neural architecture specifically designed for impulse–stationary sound separation. IS³ integrates a lightweight deep neural network with a learnable deep filtering mechanism to achieve end-to-end, data-driven component disentanglement. To enhance generalizability across diverse acoustic sources, we introduce an efficient synthetic data generation pipeline that supports multi-source mixing and realistic spectral-temporal characteristics. Quantitative evaluation demonstrates that IS³ significantly outperforms conventional harmonic–percussive separation (HPS) and wavelet-based filtering methods across standard metrics (e.g., SI-SNR, SDR). These results validate the efficacy of learned separation paradigms in complex, non-stationary acoustic environments. The framework provides a robust foundation for downstream audio applications, including speech enhancement, adaptive noise suppression, and acoustic event detection.
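The summary reports results in terms of SI-SNR (scale-invariant signal-to-noise ratio), one of the standard source-separation metrics. As an illustration only (not the paper's evaluation code), a minimal SI-SNR computation projects the estimate onto the reference and measures the residual energy:

```python
import numpy as np

def si_snr(est, ref, eps=1e-8):
    """Scale-invariant SNR in dB between an estimate and a reference signal."""
    # Remove DC offset so the metric is invariant to constant shifts.
    ref = ref - ref.mean()
    est = est - est.mean()
    # Project the estimate onto the reference: the "target" component.
    s_target = (np.dot(est, ref) / (np.dot(ref, ref) + eps)) * ref
    # Everything orthogonal to the reference counts as error.
    e_noise = est - s_target
    return 10 * np.log10(np.dot(s_target, s_target) / (np.dot(e_noise, e_noise) + eps))
```

Because the estimate is rescaled by the projection, multiplying it by any constant leaves the score unchanged, which is the "scale-invariant" property that distinguishes SI-SNR from plain SNR.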
📝 Abstract
We are interested in audio systems capable of performing differentiated processing of stationary backgrounds and isolated acoustic events within an acoustic scene, whether to apply specific processing to each part or to focus solely on one while ignoring the other. Such systems have applications in real-world scenarios, including robust adaptive audio rendering (e.g., EQ or compression), plosive attenuation in voice mixing, noise suppression or reduction, robust acoustic event classification, and even bioacoustics. To this end, we introduce IS${}^3$, a neural network designed for Impulsive--Stationary Sound Separation, which isolates impulsive acoustic events from the stationary background using a deep filtering approach and can serve as a pre-processing stage for the above-mentioned tasks. To ensure effective training, we propose a sophisticated data generation pipeline that curates and adapts existing datasets for this task. We demonstrate that a learning-based approach, built on a relatively lightweight neural architecture and trained with well-designed and varied data, succeeds on this previously unaddressed task, outperforming both the Harmonic--Percussive Sound Separation masking method, adapted from music signal processing research, and wavelet filtering on objective separation metrics.
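The Harmonic--Percussive Sound Separation baseline mentioned above is classically implemented with median filtering of the magnitude spectrogram: smoothing along time emphasizes stationary (harmonic) content, smoothing along frequency emphasizes impulsive (percussive) content, and soft masks are formed from the two. A minimal sketch of that masking idea (an illustration of the baseline technique, not the authors' exact configuration; the kernel size is an assumed example value):

```python
import numpy as np
from scipy.ndimage import median_filter

def hps_masks(mag, kernel=17):
    """Soft harmonic/percussive masks from a magnitude spectrogram.

    mag: 2D array of shape (n_freq_bins, n_time_frames).
    """
    # Median filter along time: stationary tones (horizontal ridges) survive.
    harm = median_filter(mag, size=(1, kernel))
    # Median filter along frequency: impulses (vertical ridges) survive.
    perc = median_filter(mag, size=(kernel, 1))
    eps = 1e-8
    mask_h = harm / (harm + perc + eps)  # stationary/background mask
    return mask_h, 1.0 - mask_h         # percussive/impulsive mask
```

Applying these masks to the complex STFT and inverting yields the two components; IS³ replaces this fixed median-filter heuristic with a learned deep filtering stage.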