SST-EM: Advanced Metrics for Evaluating Semantic, Spatial and Temporal Aspects in Video Editing

📅 2025-01-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing video evaluation methods struggle to accurately assess the quality of video edits, particularly with respect to semantic meaning, spatial localization, and temporal arrangement, and exhibit notable limitations in evaluating semantic coherence and visual fluency. To address this, the authors propose the first multidimensional evaluation framework specifically tailored to video editing quality. The approach integrates semantic features (via vision-language models), spatial features (using YOLO-based object detection refined by a large language model), and temporal features (leveraging Vision Transformer-based sequential modeling). Dimension weights are derived from human evaluations and regression analysis. The resulting weighted unified metric achieves significantly improved alignment with human preferences (Pearson *r* = 0.89) and outperforms baselines, including CLIPScore, by over 23% across multiple editing benchmarks. The code and models are publicly available.

📝 Abstract
Video editing models have advanced significantly, but evaluating their performance remains challenging. Traditional metrics, such as CLIP text and image scores, often fall short: text scores are limited by inadequate training data and hierarchical dependencies, while image scores fail to assess temporal consistency. We present SST-EM (Semantic, Spatial, and Temporal Evaluation Metric), a novel evaluation framework that leverages modern Vision-Language Models (VLMs), Object Detection, and Temporal Consistency checks. SST-EM comprises four components: (1) semantic extraction from frames using a VLM, (2) primary object tracking with Object Detection, (3) focused object refinement via an LLM agent, and (4) temporal consistency assessment using a Vision Transformer (ViT). These components are integrated into a unified metric with weights derived from human evaluations and regression analysis. The name SST-EM reflects its focus on Semantic, Spatial, and Temporal aspects of video evaluation. SST-EM provides a comprehensive evaluation of semantic fidelity and temporal smoothness in video editing. The source code is available in the [GitHub Repository](https://github.com/custommetrics-sst/SST_CustomEvaluationMetrics.git).
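The abstract describes fitting per-dimension weights against human ratings via regression and then combining the semantic, spatial, and temporal scores into one number. Below is a minimal sketch of that idea using ordinary least squares; the function names, the three-column score layout, and the sum-to-one normalization are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def fit_dimension_weights(scores, human_ratings):
    """Fit per-dimension weights by least-squares regression of
    human ratings on metric scores (hypothetical sketch)."""
    X = np.asarray(scores, dtype=float)        # shape (n_videos, 3): semantic, spatial, temporal
    y = np.asarray(human_ratings, dtype=float)  # shape (n_videos,)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares solution
    return w / w.sum()                          # normalize so weights sum to 1

def sst_em_score(semantic, spatial, temporal, weights):
    """Unified metric: weighted sum of the three per-dimension scores."""
    return float(np.dot(weights, [semantic, spatial, temporal]))
```

With weights fitted once on a set of human-rated edits, `sst_em_score` can then rank new edited videos by a single scalar.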
Problem

Research questions and friction points this paper is trying to address.

Video Evaluation
Semantic Coherence
Visual Smoothness
Innovation

Methods, ideas, or system contributions that make the work stand out.

SST-EM
Visual Language Models
Temporal Coherence Analysis
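The temporal component is described as a ViT-based consistency check across frames. A common way to realize this, sketched below under the assumption that each frame has already been embedded (e.g., a ViT CLS feature per frame), is the mean cosine similarity between consecutive frame embeddings; the function name and this exact formulation are illustrative, not taken from the paper.

```python
import numpy as np

def temporal_consistency(frame_embeddings):
    """Mean cosine similarity between consecutive frame embeddings,
    a simple proxy for temporal smoothness (hypothetical sketch)."""
    E = np.asarray(frame_embeddings, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)  # L2-normalize each frame
    sims = np.sum(E[:-1] * E[1:], axis=1)             # cosine of adjacent pairs
    return float(sims.mean())                         # 1.0 = perfectly smooth
```

A static clip (identical embeddings) scores 1.0, while abrupt content changes between frames pull the score toward 0.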
Varun Biyyala
Graduate Computer Science and Engineering Department, Katz School of Science and Health, Yeshiva University
Bharat Chanderprakash Kathuria
Graduate Computer Science and Engineering Department, Katz School of Science and Health, Yeshiva University
Jialu Li
Graduate Computer Science and Engineering Department, Katz School of Science and Health, Yeshiva University
Youshan Zhang
Assistant Professor, Yeshiva University
Transfer Learning · Manifold Learning · Image Analysis · Shape Analysis · Multimodal Learning