🤖 AI Summary
To address the challenges of heavy reliance on labeled data, limited feature representation, and neglect of temporal dynamics in avian vocalization classification, this paper proposes ARIONet, the first framework to integrate self-supervised contrastive learning with future audio frame prediction. ARIONet employs a multi-feature-fused Transformer encoder to learn high-quality acoustic representations without large-scale annotated data. By jointly optimizing a contrastive loss and a frame prediction loss, and incorporating targeted data augmentation, the model significantly improves both species identification accuracy and temporal modeling capability. Evaluated on four public avian vocalization datasets, ARIONet achieves up to 98.41% classification accuracy (F1-score: 97.84%) and up to 95% cosine similarity in future frame reconstruction, with substantially reduced prediction error. This work establishes a scalable, low-resource paradigm for bioacoustic analysis and ecological monitoring.
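The joint objective described above can be sketched as a weighted sum of a contrastive term and a frame-prediction term. The summary does not give the exact formulation, so the sketch below assumes a standard NT-Xent contrastive loss and a mean-squared frame-prediction loss; the weighting factor `alpha` is a hypothetical hyperparameter, not one stated in the paper.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Assumed contrastive term (NT-Xent): embeddings of two augmented
    views of the same clip are pulled together, while all other samples
    in the batch act as negatives and are pushed apart."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize rows
    sim = z @ z.T / temperature                        # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    n = len(z1)
    # the positive for row i is its augmented counterpart in the other half
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))        # softmax denominator per row
    pos = sim[np.arange(2 * n), targets]               # similarity to the positive
    return float(np.mean(log_denom - pos))             # cross-entropy over the batch

def joint_loss(z1, z2, pred_frames, true_frames, alpha=0.5):
    """Assumed joint objective: contrastive loss plus a frame-prediction
    (here MSE) loss, mixed by a hypothetical weight alpha."""
    contrastive = nt_xent_loss(z1, z2)
    prediction = float(np.mean((pred_frames - true_frames) ** 2))
    return alpha * contrastive + (1 - alpha) * prediction
```

With this formulation, near-identical augmented views yield a lower contrastive loss than unrelated pairs, which is the behavior the self-supervised training relies on.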
📄 Abstract
Automated birdsong classification is essential for advancing ecological monitoring and biodiversity studies. Despite recent progress, existing methods often depend heavily on labeled data, use limited feature representations, and overlook the temporal dynamics essential for accurate species identification. In this work, we propose a self-supervised contrastive network, ARIONet (Acoustic Representation for Interframe Objective Network), that jointly optimizes contrastive classification and future frame prediction using augmented audio representations. The model integrates multiple complementary audio features within a transformer-based encoder. Our framework is designed with two key objectives: (1) to learn discriminative, species-specific representations via contrastive learning, maximizing similarity between augmented views of the same audio segment while pushing apart representations of different samples, and (2) to model temporal dynamics by predicting future audio frames, both without requiring large-scale annotations. We validate our framework on four diverse birdsong datasets: the British Birdsong Dataset, the Bird Song Dataset, and two extended Xeno-Canto subsets (A-M and N-Z). Our method consistently outperforms existing baselines, achieving classification accuracies of 98.41%, 93.07%, 91.89%, and 91.58%, and F1-scores of 97.84%, 94.10%, 91.29%, and 90.94%, respectively. Furthermore, it demonstrates low mean absolute errors and high cosine similarity, up to 95%, in future frame prediction. Extensive experiments further confirm the effectiveness of our self-supervised learning strategy in capturing complex acoustic patterns and temporal dependencies, as well as its potential for real-world use in ecological conservation and monitoring.
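The frame-prediction results are reported as mean absolute error and cosine similarity between predicted and true future frames. The abstract does not spell out how these are aggregated, so the helper below is a minimal sketch under the common assumption that each row is one frame's feature vector and both metrics are averaged over frames; the function name and layout are illustrative, not from the paper.

```python
import numpy as np

def frame_prediction_metrics(pred, target):
    """Assumed evaluation for future-frame prediction: mean absolute
    error and mean per-frame cosine similarity, with frames as rows."""
    mae = float(np.mean(np.abs(pred - target)))
    pn = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    tn = target / np.linalg.norm(target, axis=1, keepdims=True)
    cosine = float(np.mean(np.sum(pn * tn, axis=1)))  # row-wise dot of unit vectors
    return mae, cosine
```

A perfect prediction gives MAE 0 and cosine similarity 1; the reported figure of up to 95% corresponds to a mean cosine similarity of about 0.95 under this reading.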