🤖 AI Summary
Video Object Segmentation and Tracking (VOST) suffers from poor temporal consistency, limited generalization, and low computational efficiency. To address these challenges, this paper presents a systematic survey of SAM- and SAM2-based VOST methods, organized around three temporal dimensions: (1) retaining and updating historical information from past frames, where motion-aware memory selection mitigates error accumulation; (2) extracting and optimizing discriminative features from the current frame; and (3) predicting motion and trajectories for future frames, where trajectory-guided prompting improves temporal robustness. The survey traces the evolution from early memory-based architectures to SAM2's streaming memory and real-time segmentation, showing how reviewed methods pursue better trade-offs between accuracy and real-time performance. It further identifies critical bottlenecks, including memory redundancy, suboptimal prompt efficiency, and long-term error propagation, and offers a structured technical roadmap with concrete future research directions for adapting SAM to VOST.
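The motion-aware memory selection idea above can be sketched as follows. This is a minimal, hypothetical illustration (not the implementation of any specific SAM2 variant): a fixed-capacity memory bank scores each incoming frame by how far the object's mask centroid has moved since the last stored frame, and evicts the lowest-motion (most redundant) entry when full, so the bank spans diverse object states instead of near-duplicates. The class and scoring rule are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryEntry:
    frame_idx: int
    centroid: tuple   # (x, y) center of the object mask in this frame
    motion_score: float = 0.0

@dataclass
class MotionAwareMemory:
    capacity: int
    entries: list = field(default_factory=list)

    def add(self, frame_idx, centroid):
        # Motion score = centroid displacement from the most recent entry.
        if self.entries:
            px, py = self.entries[-1].centroid
            score = ((centroid[0] - px) ** 2 + (centroid[1] - py) ** 2) ** 0.5
        else:
            score = float("inf")  # always keep the first (reference) frame
        self.entries.append(MemoryEntry(frame_idx, centroid, score))
        if len(self.entries) > self.capacity:
            # Evict the least-informative stored frame, never the newest one.
            evict = min(self.entries[:-1], key=lambda e: e.motion_score)
            self.entries.remove(evict)
```

With capacity 3, feeding frames whose object barely moves and then one large jump keeps the reference frame, the jump frame, and drops a near-duplicate in between.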
📝 Abstract
Video Object Segmentation and Tracking (VOST) presents a complex yet critical challenge in computer vision, requiring robust integration of segmentation and tracking across temporally dynamic frames. Traditional methods have struggled with domain generalization, temporal consistency, and computational efficiency. The emergence of foundation models like the Segment Anything Model (SAM) and its successor, SAM2, has introduced a paradigm shift, enabling prompt-driven segmentation with strong generalization capabilities. Building upon these advances, this survey provides a comprehensive review of SAM/SAM2-based methods for VOST, structured along three temporal dimensions: past, present, and future. We examine strategies for retaining and updating historical information (past), approaches for extracting and optimizing discriminative features from the current frame (present), and motion prediction and trajectory estimation mechanisms for anticipating object dynamics in subsequent frames (future). In doing so, we highlight the evolution from early memory-based architectures to the streaming memory and real-time segmentation capabilities of SAM2. We also discuss recent innovations such as motion-aware memory selection and trajectory-guided prompting, which aim to enhance both accuracy and efficiency. Finally, we identify remaining challenges, including memory redundancy, error accumulation, and prompt inefficiency, and suggest promising directions for future research. This survey offers a timely and structured overview of the field, aiming to guide researchers and practitioners in advancing the state of VOST through the lens of foundation models.
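Trajectory-guided prompting, as discussed above, can be illustrated with a minimal sketch. Under the assumption of a constant-velocity motion model (a simplification; real systems may use Kalman filters or learned predictors), the object's recent box centers are extrapolated to form a predicted box for the next frame, which would then be passed to SAM2's prompt encoder in place of, or alongside, the previous mask. The function name and box format are illustrative, not from any published method.

```python
def predict_next_box(history):
    """Extrapolate the next-frame box prompt from past boxes.

    history: list of (cx, cy, w, h) boxes, oldest first.
    Returns a predicted (cx, cy, w, h) box for the next frame.
    """
    if len(history) < 2:
        # Not enough history for a velocity estimate; reuse the last box.
        return history[-1]
    (x0, y0, _, _), (x1, y1, w1, h1) = history[-2], history[-1]
    # Constant-velocity extrapolation of the center; keep the latest size.
    return (2 * x1 - x0, 2 * y1 - y0, w1, h1)
```

For example, centers moving from (0, 0) to (2, 1) yield a predicted center of (4, 2), anchoring the prompt near where the object is expected rather than where it was.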