MUG: Pseudo Labeling Augmented Audio-Visual Mamba Network for Audio-Visual Video Parsing

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Weakly supervised audio-visual video parsing (AVVP) faces a fundamental trade-off between segment-level localization and event-level classification, primarily due to sparse supervision and multimodal noise. To address this, we propose MUG, a pseudo-labeling-augmented audio-visual Mamba network. The method leverages cross-modal random composition of pseudo-labels for data augmentation, enhancing fine-grained segment discrimination. Concurrently, its audio-visual Mamba (AV-Mamba) backbone models long-range temporal dependencies, thereby improving segment-aware representation learning and suppressing inter-modal noise. Evaluated on the LLP benchmark, MUG achieves state-of-the-art performance, with gains of 2.1% on visual segment-level and 1.2% on audio segment-level metrics. Notably, it simultaneously improves both segment-level localization accuracy and event-level recognition precision.

📝 Abstract
Weakly-supervised audio-visual video parsing (AVVP) aims to predict all modality-specific events and locate their temporal boundaries. Despite significant progress, due to the limitations of weak supervision and deficiencies in model architecture, existing methods struggle to simultaneously improve both segment-level and event-level prediction. In this work, we propose an audio-visual Mamba network with pseudo labeling aUGmentation (MUG), which emphasises the uniqueness of each segment and excludes noise interference from the alternate modality. Specifically, we annotate a subset of pseudo-labels following previous work. Using these unimodal pseudo-labels, we perform cross-modal random combinations to generate new data, which enhances the model's ability to parse various segment-level event combinations. For feature processing and interaction, we employ an audio-visual Mamba network (AV-Mamba), which strengthens the perception of individual segments and excludes cross-modal noise while sharing similar information across modalities. Our extensive experiments demonstrate that MUG improves state-of-the-art results on the LLP dataset in all metrics (e.g., gains of 2.1% and 1.2% on the visual and audio segment-level metrics, respectively). Our code is available at https://github.com/WangLY136/MUG.
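The AV-Mamba component described above rests on the selective state-space recurrence that underlies Mamba-style models: a hidden state is updated at every time step, which lets the network carry information across distant segments. The following is a minimal, hypothetical sketch of the plain (non-selective) linear state-space scan, not the paper's actual AV-Mamba block; all names and shapes are illustrative assumptions.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Minimal linear state-space scan, the recurrence behind Mamba-style
    temporal models (illustrative sketch, not the paper's AV-Mamba).

    x: (T, d_in) input sequence of segment features
    A: (d_state,) diagonal state-transition coefficients
    B: (d_state, d_in) input projection
    C: (d_out, d_state) output readout
    """
    h = np.zeros(A.shape[0])  # hidden state carried across time steps
    ys = []
    for t in range(x.shape[0]):
        h = A * h + B @ x[t]  # diagonal transition + projected input
        ys.append(C @ h)      # per-step readout
    return np.stack(ys)       # (T, d_out)
```

With A close to 1 the state retains long-range context; with A at 0 the scan degenerates to a memoryless per-step projection. Mamba additionally makes A, B, and C input-dependent ("selective"), which is what allows content-based filtering of noisy segments.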
Problem

Research questions and friction points this paper is trying to address.

Improving segment-level and event-level prediction in AVVP
Reducing noise interference from alternate modalities
Enhancing parsing of segment-level event combinations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pseudo-labeling augmented cross-modal data generation
Audio-visual Mamba network for feature processing
Cross-modal random combinations for segment enhancement
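The augmentation idea above pairs the audio stream of one video with the visual stream of another, then derives the new weak (video-level) label from the union of the segment-level pseudo-labels. A minimal sketch of that pairing is shown below; the function name, pairing strategy, and label format are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def cross_modal_combine(audio_feats, audio_pl, visual_feats, visual_pl, rng=None):
    """Hypothetical sketch of cross-modal random combination.

    audio_feats / visual_feats: lists of per-video feature arrays
    audio_pl / visual_pl: lists of (T, C) binary segment-level pseudo-labels
    Returns new (audio, visual, weak_label) training triples.
    """
    rng = np.random.default_rng(rng)
    n = len(audio_feats)
    perm = rng.permutation(n)  # random visual partner for each audio track
    new_samples = []
    for i, j in enumerate(perm):
        # video-level weak label = union of both modalities' pseudo-labels
        weak_label = np.clip(
            audio_pl[i].max(axis=0) + visual_pl[j].max(axis=0), 0, 1
        )
        new_samples.append((audio_feats[i], visual_feats[j], weak_label))
    return new_samples
```

Because the composed videos contain event combinations never seen in the original data, the model is pushed to discriminate segments by content rather than by co-occurrence statistics.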
Authors

Langyu Wang
Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China
Bingke Zhu
Institute of Automation, Chinese Academy of Sciences
Yingying Chen
Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China
Yiyuan Zhang
MMLab, The Chinese University of Hong Kong
Ming Tang
Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China
Jinqiao Wang
Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China