🤖 AI Summary
Existing video object segmentation methods suffer significant performance degradation in scenarios involving drastic motion, occlusion, or multi-step semantic reasoning. This work proposes the first end-to-end unified framework that jointly models language-guided video reasoning and spatio-temporal segmentation. The approach introduces a Spatio-Temporal Fusion module to inject segmentation-aware features into the vision-language backbone and designs a Temporal Dynamic Anchor Updater that maintains temporally consistent anchor frames. Evaluated on multiple referring video object segmentation benchmarks, the method achieves state-of-the-art performance and demonstrates substantially improved generalization to complex referential expressions and reasoning-intensive video scenes.
📝 Abstract
Referring Video Object Segmentation (RVOS) aims to segment target objects in videos based on natural language descriptions. However, fixed keyframe-based approaches that couple a vision-language model with a separate propagation module often fail to capture rapidly changing spatio-temporal dynamics and to handle queries requiring multi-step reasoning, leading to sharp performance drops on motion-intensive and reasoning-oriented videos beyond static RVOS benchmarks. To address these limitations, we propose VIRST (Video-Instructed Reasoning Assistant for Spatio-Temporal Segmentation), an end-to-end framework that unifies global video reasoning and pixel-level mask prediction within a single model. VIRST bridges semantic and segmentation representations through the Spatio-Temporal Fusion (STF) module, which fuses segmentation-aware video features into the vision-language backbone, and employs the Temporal Dynamic Anchor Updater to maintain temporally adjacent anchor frames that provide stable temporal cues under large motion, occlusion, and reappearance. This unified design achieves state-of-the-art results across diverse RVOS benchmarks under realistic and challenging conditions, demonstrating strong generalization to both referring and reasoning-oriented settings. The code and checkpoints are available at https://github.com/AIDASLab/VIRST.
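The abstract does not spell out the Temporal Dynamic Anchor Updater's update rule. As a rough, hypothetical sketch of the general idea (all names, the confidence signal, and the threshold rule are assumptions, not the authors' implementation), one can keep the most recent high-confidence frame as the anchor and freeze it during occlusion so the stale anchor still provides a temporal cue until the object reappears:

```python
from dataclasses import dataclass

@dataclass
class Anchor:
    frame_idx: int       # index of the frame currently serving as anchor
    confidence: float    # mask confidence recorded when it became anchor

def update_anchor(anchor: Anchor, frame_idx: int, confidence: float,
                  threshold: float = 0.5) -> Anchor:
    """Hypothetical update rule: advance the anchor to the current frame
    only when its mask confidence is high; otherwise keep the previous
    anchor so it can bridge occlusion until the target reappears."""
    if confidence >= threshold:
        return Anchor(frame_idx, confidence)
    return anchor

# Toy trace: confidence drops at frames 2-3 (occlusion), recovers at frame 4.
confidences = [0.9, 0.8, 0.2, 0.1, 0.85]
anchor = Anchor(frame_idx=0, confidence=confidences[0])
history = []
for t, c in enumerate(confidences[1:], start=1):
    anchor = update_anchor(anchor, t, c)
    history.append(anchor.frame_idx)

print(history)  # the anchor freezes at frame 1 during occlusion, then jumps to 4
```

This illustrates only the "temporally adjacent, occlusion-robust anchor" intuition; the paper's actual updater operates on learned spatio-temporal features rather than a scalar confidence.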