What's Making That Sound Right Now? Video-centric Audio-Visual Localization

📅 2025-07-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing audio-visual localization methods rely on image-level audio-visual associations, failing to model temporal dynamics and assuming that sound sources are always visible, single, and within the frame, which severely limits generalization to complex scenes. This paper introduces a video-centric framework for fine-grained sound source localization. The authors propose AVATAR, the first benchmark to systematically evaluate video-level audio-visual temporal alignment, and TAVLO, a video-centric model that fuses frame-level audio features with visual motion representations and applies a temporal attention mechanism for fine-grained audio-visual alignment. Experiments show that TAVLO significantly outperforms image-level methods across challenging settings, including single-sound, mixed-sound, multi-entity, and off-screen scenarios. The results confirm the critical role of explicit temporal modeling in audio-visual localization and establish a new video-centric paradigm.
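The summary describes TAVLO as fusing frame-level audio features with visual representations via temporal attention. The paper's exact architecture is not detailed here, but the core operation can be sketched as cross-attention in which each audio frame attends over all visual frames; this is a minimal illustrative sketch, not the authors' implementation, and all names and shapes are hypothetical.

```python
import numpy as np

def temporal_cross_attention(audio_feats, visual_feats):
    """Each audio frame attends over all visual frames, yielding a
    temporally aligned fused representation.

    audio_feats:  (Ta, D) per-frame audio features (hypothetical shapes)
    visual_feats: (Tv, D) per-frame visual features
    returns:      (Ta, D) fused features, one per audio frame
    """
    d = audio_feats.shape[-1]
    # Scaled dot-product scores between every audio and visual frame: (Ta, Tv)
    scores = audio_feats @ visual_feats.T / np.sqrt(d)
    # Softmax over the visual-frame axis (shifted for numerical stability)
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each fused row is a convex combination of visual-frame features
    return weights @ visual_feats

# Example: 4 audio frames attending over 9 visual frames, feature dim 6
fused = temporal_cross_attention(np.random.rand(4, 6), np.random.rand(9, 6))
```

In contrast, the image-level baselines criticized in the paper collapse the audio stream into a single global vector, which is what loses the temporal variation this per-frame attention preserves.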

📝 Abstract
Audio-Visual Localization (AVL) aims to identify sound-emitting sources within a visual scene. However, existing studies focus on image-level audio-visual associations, failing to capture temporal dynamics. Moreover, they assume simplified scenarios where sound sources are always visible and involve only a single object. To address these limitations, we propose AVATAR, a video-centric AVL benchmark that incorporates high-resolution temporal information. AVATAR introduces four distinct scenarios -- Single-sound, Mixed-sound, Multi-entity, and Off-screen -- enabling a more comprehensive evaluation of AVL models. Additionally, we present TAVLO, a novel video-centric AVL model that explicitly integrates temporal information. Experimental results show that conventional methods struggle to track temporal variations due to their reliance on global audio features and frame-level mappings. In contrast, TAVLO achieves robust and precise audio-visual alignment by leveraging high-resolution temporal modeling. Our work empirically demonstrates the importance of temporal dynamics in AVL and establishes a new standard for video-centric audio-visual localization.
Problem

Research questions and friction points this paper is trying to address.

Identifying sound sources in videos with temporal dynamics
Addressing limitations of single-object and visible-source assumptions
Enhancing AVL models with high-resolution temporal information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Video-centric AVL benchmark with high-resolution temporal info
TAVLO model integrates temporal information explicitly
Leverages high-resolution temporal modeling for precise alignment