Time2General: Learning Spatiotemporal Invariant Representations for Domain-Generalization Video Semantic Segmentation

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of inter-frame flickering and prediction inconsistency in domain-generalized video semantic segmentation, which arise from domain shifts and variations in temporal sampling. To tackle these issues, the authors propose Time2General, a novel framework that leverages a Spatio-Temporal Memory Decoder to aggregate multi-frame contextual information into a coherent spatio-temporal memory, enabling the decoding of temporally consistent per-frame masks without explicit correspondence propagation. The approach introduces three key innovations: a Stability Query mechanism, a masked temporal consistency loss, and a random training step-length strategy, which together improve robustness to unseen domains and diverse frame rates while enhancing temporal stability. Extensive experiments on multiple driving-scene benchmarks demonstrate that Time2General outperforms existing methods in both cross-domain accuracy and temporal consistency while running in real time at up to 18 FPS.

📝 Abstract
Domain Generalized Video Semantic Segmentation (DGVSS) trains on a single labeled driving domain and deploys directly on unseen domains, without target labels or test-time adaptation, while maintaining temporally consistent predictions over video streams. In practice, both domain shift and temporal-sampling shift break correspondence-based propagation and fixed-stride temporal aggregation, causing severe frame-to-frame flicker even in label-stable regions. We propose Time2General, a DGVSS framework built on Stability Queries. Time2General introduces a Spatio-Temporal Memory Decoder that aggregates multi-frame context into a clip-level spatio-temporal memory and decodes temporally consistent per-frame masks without explicit correspondence propagation. To further suppress flicker and improve robustness to varying sampling rates, we propose a Masked Temporal Consistency Loss that regularizes temporal prediction discrepancies across different strides, and we randomize training strides to expose the model to diverse temporal gaps. Extensive experiments on multiple driving benchmarks show that Time2General achieves a substantial improvement in cross-domain accuracy and temporal stability over prior DGSS and VSS baselines while running at up to 18 FPS. Code will be released after the review process.
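The two training-time regularizers described in the abstract can be illustrated with a minimal pure-Python sketch. This is an assumption-laden illustration, not the paper's released implementation: the function names, the L1 distance between per-pixel probability vectors, and the uniform stride range are all illustrative choices.

```python
import random

def masked_temporal_consistency_loss(pred_t, pred_prev, stable_mask):
    """Hypothetical sketch of a masked temporal consistency loss.

    Penalizes per-pixel prediction differences between two frames of a
    clip, but only inside regions flagged as label-stable, so genuinely
    changing regions (e.g. moving objects) are not penalized.

    pred_t, pred_prev: H x W grids of per-pixel class-probability lists.
    stable_mask:       H x W grid of 0/1 flags (1 = label-stable pixel).
    """
    total, count = 0.0, 0
    for row_a, row_b, row_m in zip(pred_t, pred_prev, stable_mask):
        for p_a, p_b, m in zip(row_a, row_b, row_m):
            if m:  # compare only label-stable pixels
                # L1 distance between the two probability vectors
                total += sum(abs(a - b) for a, b in zip(p_a, p_b))
                count += 1
    return total / max(count, 1)  # mean over masked pixels; 0 if mask empty

def sample_training_stride(max_stride=5):
    """Random training step-length: draw the frame gap uniformly so the
    model is exposed to diverse temporal sampling rates (range assumed)."""
    return random.randint(1, max_stride)
```

During training, each clip would be sampled with `sample_training_stride()` frames between neighbors, and the loss above would be added to the usual per-frame segmentation loss to penalize stride-dependent prediction drift.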
Problem

Research questions and friction points this paper is trying to address.

Domain Generalization
Video Semantic Segmentation
Temporal Consistency
Domain Shift
Temporal Sampling Shift
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatio-Temporal Memory Decoder
Stability Queries
Masked Temporal Consistency Loss
Domain Generalized Video Semantic Segmentation
Temporal Flicker Suppression
Siyu Chen
School of Computer Engineering, Jimei University, Xiamen, China
Ting Han
Sun Yat-sen University
point cloud, remote sensing
Haoling Huang
School of System Science and Engineering, Sun Yat-sen University, Guangzhou, China
Chaolei Wang
School of Geospatial Engineering and Science, Sun Yat-sen University, Zhuhai, China
Chengzheng Fu
College of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Nanjing, China
Duxin Zhu
School of Computer Engineering, Jimei University, Xiamen, China
Guorong Cai
School of Computer Engineering, Jimei University, Xiamen, China
Jinhe Su
School of Computer Engineering, Jimei University, Xiamen, China