🤖 AI Summary
Extracting key procedural steps and generating summaries from instructional videos remains challenging due to the need for fine-grained temporal localization and multimodal alignment.
Method: We propose a hierarchical dual-granularity modeling framework that jointly encodes subtitle-level local semantics and video-level task instructions via a multi-granularity attention mechanism, leverages user replay counts as weak supervision to identify critical step segments, and introduces the first multimodally aligned, step-annotated dataset (WikiHow/EHow), curated from instructional video sources. Our approach integrates hierarchical attention, multimodal alignment modeling, behavior-driven weak supervision, and cross-dataset transfer training.
Results: Our method achieves significant improvements over state-of-the-art methods on TVSum, BLiSS, Mr.HiSum, and WikiHow, attaining higher F1-scores and rank correlation. Evaluation on the new WikiHow/EHow dataset yields an average 4.2% gain in downstream performance.
📝 Abstract
Video summarization creates an abridged version (i.e., a summary) that provides a quick overview of the video while retaining pertinent information. In this work, we focus on summarizing instructional videos and propose a method for breaking down a video into meaningful segments, each corresponding to essential steps in the video. We propose **HierSum**, a hierarchical approach that integrates fine-grained local cues from subtitles with global contextual information provided by video-level instructions. Our approach utilizes the "most replayed" statistic as a supervisory signal to identify critical segments, thereby improving the effectiveness of the summary. We evaluate HierSum on benchmark datasets such as TVSum, BLiSS, Mr.HiSum, and the WikiHow test set, and show that it consistently outperforms existing methods in key metrics such as F1-score and rank correlation. We also curate a new multi-modal dataset using WikiHow and EHow videos and their associated articles containing step-by-step instructions. Through extensive ablation studies, we demonstrate that training on this dataset significantly enhances summarization on the target datasets.
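The two key ingredients described above, scoring segments by aligning subtitle-level (local) features with a video-level (global) instruction embedding, and turning "most replayed" counts into weak supervision targets, can be illustrated with a minimal NumPy sketch. The function names, the cosine-attention form of the alignment, and the min-max normalization of replay counts are our illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dual_granularity_scores(local_feats, global_feat, temp=0.1):
    """Score each segment by attending from a video-level (global)
    instruction embedding over subtitle-level (local) segment features.

    local_feats: (T, d) array, one feature per segment.
    global_feat: (d,) array, video-level context.
    Returns a (T,) distribution of segment importance weights.
    """
    local_n = local_feats / np.linalg.norm(local_feats, axis=1, keepdims=True)
    global_n = global_feat / np.linalg.norm(global_feat)
    # Cosine similarity between each segment and the global context,
    # sharpened by a temperature and normalized into attention weights.
    return softmax(local_n @ global_n / temp)

def replay_weak_labels(replay_counts):
    """Normalize 'most replayed' counts into [0, 1] importance targets,
    usable as weak supervision for the segment scorer."""
    c = np.asarray(replay_counts, dtype=float)
    return (c - c.min()) / (c.max() - c.min() + 1e-8)
```

In a training loop, the attention weights from `dual_granularity_scores` would be regressed against (or rank-correlated with) the targets from `replay_weak_labels`, so that heavily replayed segments receive high importance scores without any manual step annotation.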