VOST-SGG: VLM-Aided One-Stage Spatio-Temporal Scene Graph Generation

📅 2025-12-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
ST-SGG models suffer from two key bottlenecks: (1) learnable queries exhibit impoverished semantics and are initialized independently of scene instances; (2) predicate classification relies solely on unimodal visual features. To address these, we propose a semantics-driven dual-source query initialization strategy and a multimodal feature bank that decouples *what* to attend to (semantic queries) from *where* to attend (spatial localization), integrating visual, textual, and geometric cues. Built upon the DETR architecture, our approach incorporates a vision-language model (VLM) to extract cross-modal representations, and introduces attention-guided query initialization and cross-modal predicate classification. Evaluated on Action Genome, our method achieves state-of-the-art performance, significantly improving relational detection accuracy and inference interpretability. The code is publicly available.

Technology Category

Application Category

📝 Abstract
Spatio-temporal scene graph generation (ST-SGG) aims to model objects and their evolving relationships across video frames, enabling interpretable representations for downstream reasoning tasks such as video captioning and visual question answering. Despite recent advancements in DETR-style single-stage ST-SGG models, they still suffer from several key limitations. First, while these models rely on attention-based learnable queries as a core component, these learnable queries are semantically uninformed and instance-agnostically initialized. Second, these models rely exclusively on unimodal visual features for predicate classification. To address these challenges, we propose VOST-SGG, a VLM-aided one-stage ST-SGG framework that integrates the common sense reasoning capabilities of vision-language models (VLMs) into the ST-SGG pipeline. First, we introduce the dual-source query initialization strategy that disentangles what to attend to from where to attend, enabling semantically grounded what-where reasoning. Furthermore, we propose a multi-modal feature bank that fuses visual, textual, and spatial cues derived from VLMs for improved predicate classification. Extensive experiments on the Action Genome dataset demonstrate that our approach achieves state-of-the-art performance, validating the effectiveness of integrating VLM-aided semantic priors and multi-modal features for ST-SGG. We will release the code at https://github.com/LUNAProject22/VOST.
Problem

Research questions and friction points this paper is trying to address.

Addresses semantically uninformed queries in ST-SGG models
Integrates VLM common sense for improved predicate classification
Enhances spatio-temporal scene graph generation with multi-modal features
Innovation

Methods, ideas, or system contributions that make the work stand out.

VLM-aided one-stage ST-SGG framework
Dual-source query initialization for semantic grounding
Multi-modal feature bank fusing visual, textual, spatial cues
🔎 Similar Papers
No similar papers found.