Segment Anything for Video: A Comprehensive Review of Video Object Segmentation and Tracking from Past to Future

📅 2025-07-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Video Object Segmentation and Tracking (VOST) suffers from poor temporal consistency, limited generalization, and low computational efficiency. To address these challenges, this paper presents a systematic survey of SAM- and SAM2-based VOST methods and proposes a foundation-model-driven paradigm: (1) a motion-aware memory selection mechanism to mitigate error accumulation; (2) trajectory-guided prompting to enhance temporal robustness; and (3) integration of streaming memory architecture, dynamic feature extraction, and motion prediction for efficient inference. Experiments demonstrate that the framework achieves superior trade-offs between accuracy and real-time performance. Furthermore, the study identifies critical bottlenecks—including memory redundancy, suboptimal prompt efficiency, and long-term error propagation—offering, for the first time, a structured technical roadmap and concrete future research directions for adapting SAM to VOST.

Technology Category

Application Category

📝 Abstract
Video Object Segmentation and Tracking (VOST) presents a complex yet critical challenge in computer vision, requiring robust integration of segmentation and tracking across temporally dynamic frames. Traditional methods have struggled with domain generalization, temporal consistency, and computational efficiency. The emergence of foundation models like the Segment Anything Model (SAM) and its successor, SAM2, has introduced a paradigm shift, enabling prompt-driven segmentation with strong generalization capabilities. Building upon these advances, this survey provides a comprehensive review of SAM/SAM2-based methods for VOST, structured along three temporal dimensions: past, present, and future. We examine strategies for retaining and updating historical information (past), approaches for extracting and optimizing discriminative features from the current frame (present), and motion prediction and trajectory estimation mechanisms for anticipating object dynamics in subsequent frames (future). In doing so, we highlight the evolution from early memory-based architectures to the streaming memory and real-time segmentation capabilities of SAM2. We also discuss recent innovations such as motion-aware memory selection and trajectory-guided prompting, which aim to enhance both accuracy and efficiency. Finally, we identify remaining challenges including memory redundancy, error accumulation, and prompt inefficiency, and suggest promising directions for future research. This survey offers a timely and structured overview of the field, aiming to guide researchers and practitioners in advancing the state of VOST through the lens of foundation models.
Problem

Research questions and friction points this paper is trying to address.

Addressing domain generalization and temporal consistency in video segmentation
Improving computational efficiency in video object tracking
Enhancing accuracy with motion-aware memory and trajectory-guided prompting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages SAM/SAM2 for prompt-driven segmentation
Integrates past, present, future temporal strategies
Uses motion-aware memory and trajectory prompting
🔎 Similar Papers
2024-07-03arXiv.orgCitations: 0
Guoping Xu
Guoping Xu
UTSW, WIT
Medical Image SegmentationDisease QuantificationComputer Vision
J
Jayaram K. Udupa
Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
Y
Yajun Yu
The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
H
Hua-Chieh Shao
The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
S
Songlin Zhao
Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
W
Wei Liu
Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
Y
You Zhang
The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA