🤖 AI Summary
This paper introduces Audio-Visual Instance Segmentation (AVIS), a novel task aiming to jointly identify, pixel-wise segment, and temporally track sounding objects in audible videos. To formalize this fine-grained multimodal problem, we provide the first rigorous definition of AVIS. We further introduce AVISeg—the first large-scale long-video benchmark for AVIS—comprising 926 videos, 90K instance masks, and 26 semantic classes. Methodologically, we propose an end-to-end framework integrating frame-level sound source localization, object-context token compression, and windowed cross-modal attention to model long-range audio-visual dependencies and temporal instance associations. Our approach achieves significant improvements over existing methods on AVISeg. Moreover, empirical analysis reveals critical limitations of current multimodal foundation models in instance-level audio source localization and temporal reasoning—highlighting key challenges for future research.
📝 Abstract
In this paper, we propose a new multi-modal task, termed audio-visual instance segmentation (AVIS), which aims to simultaneously identify, segment and track individual sounding object instances in audible videos. To facilitate this research, we introduce a high-quality benchmark named AVISeg, containing over 90K instance masks from 26 semantic categories in 926 long videos. Additionally, we propose a strong baseline model for this task. Our model first localizes sound sources within each frame and condenses object-specific contexts into concise tokens. It then builds long-range audio-visual dependencies between these tokens using window-based attention and tracks sounding objects across entire video sequences. Extensive experiments reveal that our method performs best on AVISeg, surpassing existing methods from related tasks. We further evaluate several multi-modal large models. Unfortunately, they exhibit subpar performance on instance-level sound source localization and temporal perception. We expect that AVIS will inspire the community towards a more comprehensive multi-modal understanding. The dataset and code are available at https://github.com/ruohaoguo/avis.
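The window-based cross-modal attention described above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes, for illustration only, one compressed object token and one audio feature per frame, and restricts attention to fixed-size temporal windows so that long videos are processed in local chunks:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def windowed_cross_attention(obj_tokens, audio_tokens, window=4):
    """Toy windowed cross-modal attention (hypothetical sketch).

    obj_tokens:   (T, D) array, one compressed object token per frame (assumption)
    audio_tokens: (T, D) array, one audio feature per frame (assumption)
    window:       temporal window size; attention is confined to each window

    Object tokens act as queries; audio tokens supply keys and values.
    """
    T, D = obj_tokens.shape
    out = np.empty_like(obj_tokens)
    for start in range(0, T, window):
        end = min(start + window, T)
        q = obj_tokens[start:end]          # queries from the visual stream
        k = v = audio_tokens[start:end]    # keys/values from the audio stream
        attn = softmax(q @ k.T / np.sqrt(D))  # scaled dot-product attention
        out[start:end] = attn @ v          # audio-conditioned object tokens
    return out
```

Confining attention to windows keeps the cost linear in video length rather than quadratic, which is the usual motivation for window-based attention on long sequences; the actual model's token compression and tracking components are beyond this sketch.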