GAID: Frame-Level Gated Audio-Visual Integration with Directional Perturbation for Text-Video Retrieval

📅 2025-08-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-to-video retrieval methods predominantly rely on visual modalities, either neglecting audio semantics or employing coarse-grained multimodal fusion, leading to suboptimal cross-modal alignment. To address this, we propose a fine-grained audio-visual fusion framework coupled with directional semantic perturbation. First, we design a text-guided, frame-level gated fusion mechanism that enables dynamic and adaptive integration of auditory and visual features. Second, we introduce structure-aware directional perturbation in the text embedding space to enhance textual discriminability and cross-modal robustness. Our method is trained end-to-end in a single stage without increasing inference overhead. Extensive experiments demonstrate state-of-the-art performance across four major benchmarks—MSR-VTT, DiDeMo, LSMDC, and VATEX—achieving significant improvements in both retrieval accuracy and efficiency.

📝 Abstract
Text-to-video retrieval requires precise alignment between language and temporally rich video signals. Existing methods predominantly exploit visual cues and often overlook complementary audio semantics or adopt coarse fusion strategies, leading to suboptimal multimodal representations. We present GAID, a framework that jointly addresses this gap via two key components: (i) a Frame-level Gated Fusion (FGF) that adaptively integrates audio and visual features under textual guidance, enabling fine-grained temporal alignment; and (ii) a Directional Adaptive Semantic Perturbation (DASP) that injects structure-aware perturbations into text embeddings, enhancing robustness and discrimination without incurring multi-pass inference. These modules complement each other -- fusion reduces modality gaps while perturbation regularizes cross-modal matching -- yielding more stable and expressive representations. Extensive experiments on MSR-VTT, DiDeMo, LSMDC, and VATEX show consistent state-of-the-art results across all retrieval metrics with notable efficiency gains. Our code is available at https://github.com/YangBowenn/GAID.
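The abstract describes a text-guided, per-frame gate over audio and visual features. A minimal sketch of one plausible form of such a gate is shown below, assuming a sigmoid gate computed from the concatenated frame-level visual feature, audio feature, and text query; the weight matrix `W` and the convex-combination form are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def frame_gated_fusion(visual, audio, text, W):
    """Hypothetical frame-level gated fusion sketch.

    visual, audio: (T, d) per-frame features; text: (d,) query embedding.
    W: (d, 3d) assumed gate projection. For each frame, a per-dimension
    gate g in (0, 1) mixes the visual and audio features.
    """
    T, d = visual.shape
    fused = np.zeros((T, d))
    for t in range(T):
        z = np.concatenate([visual[t], audio[t], text])  # (3d,)
        g = sigmoid(W @ z)                               # per-dim gate
        fused[t] = g * visual[t] + (1.0 - g) * audio[t]  # convex mix
    return fused
```

With `W` at zero the gate is 0.5 everywhere, so the output degenerates to a plain average of the two modalities; training would move the gate toward whichever modality the text query favors at each frame.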
Problem

Research questions and friction points this paper is trying to address.

Improves text-video retrieval via audio-visual fusion
Enhances multimodal representation with adaptive perturbations
Addresses coarse fusion strategies in existing methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frame-level Gated Fusion for audio-visual integration
Directional Adaptive Semantic Perturbation for robustness
Fine-grained temporal alignment under textual guidance
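The perturbation component above operates on text embeddings along a structure-aware direction. As a rough, hypothetical sketch (the paper's exact DASP formulation is not given on this page), one could nudge the text embedding along its direction toward the matched video embedding and re-normalize; the step size `eps` and the choice of direction here are illustrative assumptions.

```python
import numpy as np

def directional_perturb(text_emb, video_emb, eps=0.1):
    """Hypothetical directional perturbation sketch.

    Shifts a text embedding a small step along the (assumed)
    text-to-video direction, then projects back to the unit sphere,
    where cosine-similarity retrieval typically operates.
    """
    direction = video_emb - text_emb
    norm = np.linalg.norm(direction)
    if norm < 1e-12:          # already aligned; nothing to do
        return text_emb
    perturbed = text_emb + eps * direction / norm
    return perturbed / np.linalg.norm(perturbed)
```

Because the perturbation is applied only during training as a regularizer, inference keeps the unperturbed text encoder, which is consistent with the abstract's claim of no multi-pass inference overhead.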
Bowen Yang
Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
Yun Cao
Researcher, Tencent
Chen He
Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
Xiaosu Su
Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China