ContextIQ: A Multimodal Expert-Based Video Retrieval System for Contextual Advertising

📅 2024-10-29
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Fine-grained video retrieval for contextual advertising faces the dual challenges of content explosion and privacy constraints. To address this, we propose a multimodal expert collaboration architecture that operates without joint training: it decouples the modeling of video, audio, subtitles, and semantic metadata (e.g., objects, actions, emotions) and fuses the heterogeneous modal representations for high-precision zero-shot text-to-video retrieval. This approach overcomes the limitations of conventional single-alignment models, significantly improving semantic alignment accuracy and content controllability while ensuring brand safety and regulatory compliance. Evaluated on multiple standard benchmarks, it matches or surpasses state-of-the-art performance. Empirical results show an average 12.7% improvement in retrieval accuracy over unimodal baselines, and the system supports millisecond-level brand-safety filtering. The method has been deployed in an industrial-scale advertising system.

📝 Abstract
Contextual advertising serves ads that are aligned to the content that the user is viewing. The rapid growth of video content on social platforms and streaming services, along with privacy concerns, has increased the need for contextual advertising. Placing the right ad in the right context creates a seamless and pleasant ad viewing experience, resulting in higher audience engagement and, ultimately, better ad monetization. From a technology standpoint, effective contextual advertising requires a video retrieval system capable of understanding complex video content at a very granular level. Current text-to-video retrieval models based on joint multimodal training demand large datasets and computational resources, limiting their practicality and lacking the key functionalities required for ad ecosystem integration. We introduce ContextIQ, a multimodal expert-based video retrieval system designed specifically for contextual advertising. ContextIQ utilizes modality-specific experts (video, audio, transcript captions, and metadata such as objects, actions, and emotions) to create semantically rich video representations. We show that our system, without joint training, achieves better or comparable results to state-of-the-art models and commercial solutions on multiple text-to-video retrieval benchmarks. Our ablation studies highlight the benefits of leveraging multiple modalities for enhanced video retrieval accuracy instead of using a vision-language model alone. Furthermore, we show how video retrieval systems such as ContextIQ can be used for contextual advertising in an ad ecosystem while also addressing concerns related to brand safety and filtering inappropriate content.
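The expert-based design described in the abstract can be pictured as late fusion: each frozen, modality-specific expert embeds the video independently, the text query is embedded into each expert's space, and per-expert similarities are combined into one ranking score. The following is a minimal illustrative sketch of that idea; the embedding dimensions, expert names, fusion weights, and random vectors standing in for real encoders are all assumptions, not details from the paper.

```python
# Illustrative sketch of expert-based late fusion for zero-shot
# text-to-video retrieval. Dimensions, expert names, and weights are
# hypothetical; random vectors stand in for real encoder outputs.
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    """Unit-normalize so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Each frozen expert maps a video into its own embedding space
# (no joint training across modalities).
experts = ["video", "audio", "transcript", "metadata"]
n_videos, dim = 5, 8
video_embs = {e: l2_normalize(rng.normal(size=(n_videos, dim)))
              for e in experts}

# The text query is embedded once per expert space (in practice via each
# expert's paired text encoder).
query_embs = {e: l2_normalize(rng.normal(size=dim)) for e in experts}

# Late fusion: a weighted sum of per-expert cosine similarities.
weights = {"video": 0.4, "audio": 0.2, "transcript": 0.2, "metadata": 0.2}
scores = sum(weights[e] * (video_embs[e] @ query_embs[e]) for e in experts)

ranking = np.argsort(-scores)  # best-matching videos first
print(ranking)
```

Because fusion happens at the score level, an expert can be added, removed, or reweighted without retraining anything else, which is the practical appeal of avoiding joint training.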
Problem

Research questions and friction points this paper is trying to address.

Develops video retrieval for contextual ads without joint training
Enhances ad relevance using multimodal video analysis
Addresses brand safety and content filtering in advertising
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal expert-based video retrieval system
Modality-specific experts for rich representations
Matches or outperforms state-of-the-art models without joint training
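The brand-safety concern raised above reduces, in embedding terms, to a fast pre-retrieval filter: discard any video whose representation sits too close to an unsafe-concept embedding. Here is a minimal sketch of such a filter; the threshold value, concept count, and random embeddings are illustrative assumptions, not the paper's method.

```python
# Hypothetical brand-safety filter: reject videos whose embedding is too
# similar to any "unsafe concept" embedding. Threshold and embeddings are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

dim = 8
# e.g., embeddings of concepts like "graphic violence" from a text encoder.
unsafe_concepts = l2_normalize(rng.normal(size=(3, dim)))
videos = l2_normalize(rng.normal(size=(6, dim)))

threshold = 0.6  # assumed cosine-similarity cutoff
# Highest similarity of each video to any unsafe concept.
max_sim = (videos @ unsafe_concepts.T).max(axis=1)
safe_ids = np.flatnonzero(max_sim < threshold)
print(safe_ids)
```

Since the check is a single matrix multiply over precomputed embeddings, it can run at millisecond scale even for large catalogs, consistent with the latency claim in the summary above.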
Ashutosh Chaubey
CS PhD, University of Southern California
Computer Vision · Multimodal AI · Speech Processing
Anoubhav Agarwaal
Anoki Inc.
Sartaki Sinha Roy
Anoki Inc.
Aayush Agarwal
Anoki Inc.
Susmita Ghose
Anoki Inc.