🤖 AI Summary
Conventional time-frequency (T-F) representations exhibit limited discriminative power for underwater acoustic signal recognition under low signal-to-noise ratio (SNR) conditions.
Method: This paper proposes the Histogram-layer Time-Delay Neural Network (HL-TDNN) framework, which systematically evaluates and fuses multiple T-F features, including the short-time Fourier transform (STFT), Constant-Q Transform (CQT), and Mel-spectrogram, at the feature level. A learnable histogram layer replaces conventional frame-level statistics to improve robustness and interpretability, and ablation studies validate the efficacy of the feature combinations.
Contribution/Results: The work identifies that specific multi-scale T-F fusion strategies, particularly CQT+Mel, significantly improve model discriminability. On a real-world underwater target recognition task, the best fusion yields a 5.2% absolute accuracy gain over single features. These results demonstrate the role of acoustics-informed, synergistic T-F feature design for low-SNR modeling, offering an interpretable, high-performance, and lightweight approach to weak underwater signal recognition.
📝 Abstract
While deep learning has reduced the prevalence of manual feature extraction, transformation of data via feature engineering remains essential for improving model performance, particularly for underwater acoustic signals. The methods by which audio signals are converted into time-frequency representations, and the subsequent handling of these spectrograms, can significantly impact performance. This work demonstrates the performance impact of using different combinations of time-frequency features in a histogram layer time delay neural network. An optimal set of features is identified, with results indicating that specific feature combinations outperform single data features.
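The feature-level fusion described above can be illustrated with a minimal sketch: compute two T-F representations of the same signal on a shared frame grid, then concatenate them along the frequency axis before feeding them to a network. The code below is a NumPy-only illustration, not the paper's implementation; the window sizes, mel-band count, and toy sinusoid-plus-noise signal are assumptions for the example, and the CQT branch is omitted for brevity.

```python
import numpy as np

def stft_mag(y, n_fft=512, hop=256):
    # Frame the signal, apply a Hann window, and take the magnitude FFT.
    win = np.hanning(n_fft)
    n_frames = 1 + (len(y) - n_fft) // hop
    frames = np.stack([y[i * hop:i * hop + n_fft] * win for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T  # shape: (n_fft//2 + 1, n_frames)

def mel_filterbank(sr, n_fft, n_mels=40):
    # Triangular filters spaced evenly on the mel scale.
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):
            fb[m - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[m - 1, k] = (r - k) / max(r - c, 1)
    return fb

sr = 16000
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(sr)  # toy 1 s "signal"

S = stft_mag(y)                          # STFT magnitude: (257, 61)
M = mel_filterbank(sr, 512) @ (S ** 2)   # mel-spectrogram from the power STFT: (40, 61)

# Feature-level fusion: stack the two representations along the frequency axis.
fused = np.concatenate([np.log1p(S), np.log1p(M)], axis=0)  # (297, 61)
print(fused.shape)
```

Because both branches share the same hop length, their time axes align and concatenation is well defined; in practice a library such as librosa would supply the STFT, CQT, and Mel-spectrogram with a matched `hop_length` for the same purpose.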