🤖 AI Summary
This study addresses the challenge of underwater acoustic target recognition, which is severely constrained by the scarcity of labeled data and thus ill-suited for conventional supervised learning. The work presents the first systematic evaluation of cross-domain pretrained audio models for transfer learning in this domain, revealing that although their frozen embeddings lack explicit structural priors, they can effectively disentangle vessel-type semantic features through lightweight linear probing. This approach substantially suppresses recording-specific artifacts and achieves high-accuracy vessel classification with minimal labeling effort, significantly reducing reliance on large-scale, high-quality annotated underwater recordings. The findings establish a new paradigm for low-resource acoustic perception, demonstrating that powerful generic audio representations can be efficiently adapted to specialized underwater tasks without extensive fine-tuning or abundant labeled data.
📝 Abstract
Increasing levels of anthropogenic noise from ships contribute significantly to underwater sound pollution, posing risks to marine ecosystems. This makes monitoring crucial to understand and quantify the impact of ship-radiated noise. Passive Acoustic Monitoring (PAM) systems are widely deployed for this purpose, generating years of underwater recordings across diverse soundscapes. Manual analysis of such large-scale data is impractical, motivating automated approaches based on machine learning. Recent advances in automatic Underwater Acoustic Target Recognition (UATR) have largely relied on supervised learning, which is constrained by the scarcity of labeled data. Transfer Learning (TL) offers a promising alternative to mitigate this limitation. In this work, we conduct the first empirical comparative study of transfer learning for UATR, evaluating multiple pretrained audio models originating from diverse audio domains. The pretrained model weights are frozen, and the resulting embeddings are analyzed through classification, clustering, and similarity-based evaluations. The analysis shows that the geometric structure of the embedding space is largely dominated by recording-specific characteristics. However, a simple linear probe can effectively suppress this recording-specific information and isolate ship-type features from these embeddings. As a result, linear probing enables effective automatic UATR using pretrained audio models at low computational cost, significantly reducing the need for large amounts of high-quality labeled ship recordings.
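The linear-probing setup described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the embeddings here are synthetic stand-ins generated to mimic ship-type structure, whereas in the study they would come from a frozen pretrained audio model applied to PAM recordings. All array shapes and class counts are assumptions for the example.

```python
# Sketch of linear probing on frozen embeddings (hypothetical data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_clips, dim, n_ship_types = 600, 128, 4  # assumed sizes for illustration
# Stand-in for frozen embeddings: each clip gets a class-dependent offset
# plus noise, mimicking ship-type information mixed with other variation.
labels = rng.integers(0, n_ship_types, size=n_clips)
class_means = rng.normal(size=(n_ship_types, dim))
embeddings = class_means[labels] + rng.normal(size=(n_clips, dim))

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.25, random_state=0, stratify=labels
)

# The linear probe: a single linear classifier trained on the frozen
# features, with the (simulated) embedding model left untouched.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = probe.score(X_test, y_test)
print(f"probe accuracy: {accuracy:.2f}")
```

Because only the small linear layer is trained, the probe is cheap to fit and needs far fewer labeled clips than end-to-end supervised training, which is the practical point the abstract makes.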