🤖 AI Summary
Weakly supervised audio-visual video parsing (AVVP) faces a fundamental trade-off between segment-level localization and event-level classification, primarily due to sparse supervision and multimodal noise. To address this, we propose MUG, a pseudo-label-augmented audio-visual Mamba network. Our method leverages cross-modal random composition of pseudo-labels for data augmentation, enhancing fine-grained segment discrimination. Concurrently, it exploits the Mamba architecture to model long-range temporal dependencies, improving segment-aware representation learning and suppressing inter-modal noise. Evaluated on the LLP benchmark, MUG achieves state-of-the-art performance, with gains of 2.1% on the visual segment-level metric and 1.2% on the audio segment-level metric. Notably, it is the first weakly supervised approach to simultaneously improve both segment-level localization accuracy and event-level recognition precision.
📝 Abstract
Weakly-supervised audio-visual video parsing (AVVP) aims to predict all modality-specific events and locate their temporal boundaries. Despite significant progress, the limitations of weak supervision and deficiencies in model architecture prevent existing methods from simultaneously improving both segment-level and event-level prediction. In this work, we propose an audio-visual Mamba network with pseudo labeling aUGmentation (MUG) that emphasizes the uniqueness of each segment and excludes noise interference from the other modality. Specifically, we annotate a subset of pseudo-labels based on previous work. Using these unimodal pseudo-labels, we perform cross-modal random combinations to generate new data, which enhances the model's ability to parse various segment-level event combinations. For feature processing and interaction, we employ an audio-visual Mamba network (AV-Mamba), which enhances the ability to perceive different segments and excludes extraneous modal noise while sharing similar modal information. Extensive experiments demonstrate that MUG improves on state-of-the-art results on the LLP dataset across all metrics (e.g., gains of 2.1% and 1.2% on the visual and audio segment-level metrics, respectively). Our code is available at https://github.com/WangLY136/MUG.
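The cross-modal random combination idea can be illustrated with a minimal sketch: pair the audio stream of one video with the visual stream of a randomly chosen partner video, carrying each modality's segment-level pseudo-labels along, and derive the new weak video-level label as the union of the two. This is a hypothetical simplification for intuition only; the tensor shapes, the `cross_modal_mix` helper, and the label-union rule are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_modal_mix(audio_feats, visual_feats, audio_pl, visual_pl):
    """Compose new training samples by pairing each video's audio stream
    with the visual stream of a random partner video (hypothetical sketch).

    Illustrative shapes: features (N, T, D); segment-level pseudo-labels
    (N, T, C) with binary entries over C event classes.
    """
    n = audio_feats.shape[0]
    perm = rng.permutation(n)          # random visual partner per video
    new_audio = audio_feats            # audio kept from the original video
    new_visual = visual_feats[perm]    # visual track swapped in from partner
    new_audio_pl = audio_pl            # pseudo-labels travel with their modality
    new_visual_pl = visual_pl[perm]
    # Weak video-level label of the composed sample: union (element-wise max
    # over classes) of the events present in either modality's segments.
    video_label = np.maximum(new_audio_pl.max(axis=1),
                             new_visual_pl.max(axis=1))
    return new_audio, new_visual, new_audio_pl, new_visual_pl, video_label
```

Because each modality keeps its own pseudo-labels, the composed sample exposes the model to segment-level event combinations that never co-occur in real videos, which is the intended augmentation effect.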