🤖 AI Summary
To address the high computational complexity of the continuous wavelet transform (CWT) in acoustic recognition, which particularly limits its applicability to non-stationary audio signals, this paper proposes a scalogram-structure optimization framework. It jointly optimizes the wavelet kernel length and the hop size of the output scalogram, enabling lightweight CWT feature extraction. The method preserves the robustness of CNN-based classification while substantially reducing computational overhead: across multiple acoustic recognition tasks, it reports an average 47% reduction in FLOPs, less than 0.3% accuracy degradation, and a 2.1× inference speedup. Its core contribution is treating the scalogram structure (kernel length and hop size) as tunable design parameters rather than fixed settings, offering a path toward efficient time-frequency feature extraction.
📝 Abstract
The Continuous Wavelet Transform (CWT) is an effective tool for feature extraction in acoustic recognition using Convolutional Neural Networks (CNNs), particularly when applied to non-stationary audio. However, its high computational cost poses a significant challenge, often leading researchers to prefer alternative methods such as the Short-Time Fourier Transform (STFT). To address this issue, this paper proposes a method to reduce the computational complexity of CWT by optimizing the length of the wavelet kernel and the hop size of the output scalogram. Experimental results demonstrate that the proposed approach significantly reduces computational cost while maintaining the robust performance of the trained model in acoustic recognition tasks.
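The cost trade-off described above can be illustrated with a minimal sketch (this is an illustration, not the paper's actual implementation): a Morlet-based CWT in which each wavelet kernel is truncated to a fixed length `kernel_len` and the output scalogram is decimated by a `hop` factor. In a strided implementation, per-scale cost drops roughly from O(N·L_full) to O(N·L/hop); here, for clarity, the full convolution is computed and then subsampled. All function and parameter names are hypothetical.

```python
import numpy as np

def morlet_kernel(scale, kernel_len, w0=6.0):
    """Complex Morlet wavelet sampled at `kernel_len` points for a given scale.
    Truncating the kernel (rather than covering the wavelet's full support,
    which grows with scale) is one way to cap per-scale convolution cost."""
    t = np.arange(-(kernel_len // 2), kernel_len // 2 + 1) / scale
    return np.exp(1j * w0 * t) * np.exp(-0.5 * t**2) / np.sqrt(scale)

def cwt_scalogram(x, scales, kernel_len=129, hop=8):
    """CWT magnitude scalogram with truncated kernels and an output hop.
    Note: np.convolve computes every output sample and we subsample after;
    a strided convolution would realize the FLOPs savings directly."""
    rows = []
    for s in scales:
        k = morlet_kernel(s, kernel_len)
        coef = np.convolve(x, k, mode="same")  # full-rate CWT row
        rows.append(np.abs(coef[::hop]))       # keep every hop-th frame
    return np.stack(rows)

# Toy signal: a 1 kHz tone sampled at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)
scalo = cwt_scalogram(x, scales=np.geomspace(2, 64, 32), kernel_len=129, hop=8)
print(scalo.shape)  # 32 scales x (16000 / 8) frames
```

Shortening `kernel_len` trades time-frequency resolution at large scales for speed, while increasing `hop` coarsens the scalogram's time axis; the paper's contribution is choosing these two parameters so that the downstream CNN's accuracy is preserved.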