Bridging Vision Language Models and Symbolic Grounding for Video Question Answering

📅 2025-09-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Video question answering (VQA) requires reasoning over spatiotemporal and causal cues, yet most existing vision-language models (VLMs) rely on shallow visual-linguistic correlations, yielding weak temporal grounding and poor interpretability. To address this, we propose SG-VLM, a neuro-symbolic framework that modularly integrates a frozen VLM with symbolic scene graphs via prompt engineering and vision-based localization alignment. The method comprises: (1) dynamic scene graph construction that explicitly models objects, relations, and events; (2) a graph-structured, multi-stage prompting and localization-alignment mechanism; and (3) a hybrid reasoning pipeline combining symbolic structure with holistic VLM inference. Evaluated on three benchmarks (NExT-QA, iVQA, and ActivityNet-QA) with multiple VLMs (QwenVL, InternVL), SG-VLM improves accuracy on causal and temporal reasoning tasks and enhances decision interpretability, outperforming prior baselines, though gains over strong VLMs remain limited. These results highlight both the promise and the current limits of using symbolic priors to guide fine-grained VLM understanding.

📝 Abstract
Video Question Answering (VQA) requires models to reason over spatial, temporal, and causal cues in videos. Recent vision-language models (VLMs) achieve strong results but often rely on shallow correlations, leading to weak temporal grounding and limited interpretability. We study symbolic scene graphs (SGs) as intermediate grounding signals for VQA. SGs provide structured object-relation representations that complement VLMs' holistic reasoning. We introduce SG-VLM, a modular framework that integrates frozen VLMs with scene graph grounding via prompting and visual localization. Across three benchmarks (NExT-QA, iVQA, ActivityNet-QA) and multiple VLMs (QwenVL, InternVL), SG-VLM improves causal and temporal reasoning and outperforms prior baselines, though gains over strong VLMs are limited. These findings highlight both the promise and current limitations of symbolic grounding, and offer guidance for future hybrid VLM-symbolic approaches in video understanding.
Problem

Research questions and friction points this paper is trying to address.

Integrating symbolic scene graphs with vision language models
Improving temporal and causal reasoning in video question answering
Enhancing interpretability and grounding in video understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates frozen VLMs with scene graphs
Uses prompting and visual localization techniques
Combines holistic reasoning with structured representations
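The core prompting idea in these bullets can be sketched as follows. This is a minimal illustration, not the paper's implementation: the triple-based scene-graph schema, the serialization format, and the function name `scene_graph_to_prompt` are all assumptions made for clarity.

```python
# Hypothetical sketch of scene-graph-grounded prompting for a frozen VLM.
# The (subject, relation, object) schema and the prompt layout are assumed;
# the paper does not specify its exact serialization.

def scene_graph_to_prompt(frames, question):
    """Serialize per-frame (subject, relation, object) triples into a
    textual grounding block that is prepended to the question before it
    is sent to a frozen VLM."""
    lines = []
    for t, triples in enumerate(frames):
        facts = "; ".join(f"{s} {r} {o}" for s, r, o in triples)
        lines.append(f"frame {t}: {facts}")
    graph_block = "\n".join(lines)
    return (
        "Scene graph (objects, relations, events):\n"
        f"{graph_block}\n"
        f"Question: {question}\n"
        "Answer using only the grounded evidence above."
    )

# Toy two-frame video: a person picks up a cup, then drinks from it.
frames = [
    [("person", "holds", "cup"), ("cup", "on", "table")],
    [("person", "drinks_from", "cup")],
]
prompt = scene_graph_to_prompt(frames, "What does the person do with the cup?")
print(prompt)
```

Keeping the VLM frozen and injecting structure purely through the prompt is what makes the framework modular: the same serialization can be paired with different backbones (e.g. QwenVL or InternVL) without retraining.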