Visual Content Detection in Educational Videos with Transfer Learning and Dataset Enrichment

📅 2025-06-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Non-structural visual elements—such as hand-drawn diagrams, tables, and illustrations—in educational videos are critical for learning, yet their automatic detection remains challenging due to morphological irregularity, ambiguous text–graphic boundaries, and severe scarcity of labeled data. To address these issues, we propose a YOLO-based transfer learning framework specifically designed for instructional video analysis. Our approach integrates multi-source benchmark data for joint pre-training, domain-adaptive fine-tuning, and a novel semi-supervised pipeline for automatic annotation of mixed text–graphic objects. Key contributions include: (1) the first publicly available, fine-grained annotated benchmark dataset of educational video frames; (2) a detection paradigm tailored to non-standard visual objects; (3) a 23.6% improvement in mean Average Precision (mAP), significantly enhancing robustness in detecting unstructured visual elements; and (4) full open-sourcing of datasets and code to enable reproducible research in educational video understanding.
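To make the transfer-learning recipe above concrete, the sketch below fine-tunes a YOLO detector on annotated lecture-video frames with the Ultralytics API. It is a minimal illustration, not the paper's released code: the checkpoint name, the hypothetical dataset config lecture_frames.yaml, and all hyperparameters are assumptions.

```python
# Minimal domain-adaptive fine-tuning sketch.
# Assumptions: Ultralytics YOLO API, a hypothetical dataset config
# "lecture_frames.yaml" describing annotated lecture-video frames,
# and illustrative hyperparameters (not the paper's settings).
from ultralytics import YOLO

# Start from weights pre-trained on generic benchmarks, then adapt them
# to the lecture-video domain.
model = YOLO("yolov8m.pt")

model.train(
    data="lecture_frames.yaml",  # hypothetical annotated lecture-frame dataset
    epochs=100,
    imgsz=640,
    freeze=10,     # keep early backbone layers frozen during adaptation
    lr0=1e-3,
)

# Report mAP on the held-out validation split.
metrics = model.val()
print(metrics.box.map)    # mAP@0.5:0.95
print(metrics.box.map50)  # mAP@0.5
```

Freezing the early backbone layers is one common way to keep generic visual features intact while the detection head adapts to lecture-specific elements such as tables and hand-drawn diagrams.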

📝 Abstract
Video is transforming education, with online courses and recorded lectures supplementing and replacing classroom teaching. Recent research has focused on enhancing information retrieval for video lectures through advanced navigation, searchability, summarization, and question-answering chatbots. Visual elements like tables, charts, and illustrations are central to comprehension, retention, and data presentation in lecture videos, yet their full potential for improving access to video content remains underutilized. A major factor is that accurate automatic detection of visual elements in a lecture video is challenging: i) most visual elements, such as charts, graphs, tables, and illustrations, are artificially created and lack any standard structure, and ii) coherent visual objects may lack clear boundaries and may be composed of connected text and visual components. Despite advances in deep learning-based object detection, current models do not yield satisfactory performance due to the unique nature of visual content in lectures and the scarcity of annotated datasets. This paper reports on a transfer learning approach for detecting visual elements in lecture video frames. A suite of state-of-the-art object detection models was evaluated on lecture video datasets, and YOLO emerged as the most promising model for this task. YOLO was then optimized for lecture video object detection by training on multiple benchmark datasets and deploying a semi-supervised auto-labeling strategy. Results evaluate the success of this approach and its promise as a general solution to object detection in lecture videos. Contributions include a publicly released benchmark of annotated lecture video frames, along with source code to facilitate future research.
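The semi-supervised auto-labeling strategy mentioned in the abstract can be approximated with a simple pseudo-labeling pass: a partially trained detector labels unannotated frames, and only high-confidence boxes are kept as new training labels. This is a sketch under stated assumptions; the checkpoint path, directory names, and 0.6 confidence threshold are illustrative rather than the paper's exact pipeline.

```python
# Pseudo-labeling sketch for semi-supervised annotation.
# Assumptions: Ultralytics YOLO API, a hypothetical fine-tuned checkpoint,
# a directory of unlabeled frames, and a 0.6 confidence threshold.
from pathlib import Path

from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # hypothetical checkpoint path
frames_dir = Path("unlabeled_frames")              # hypothetical unlabeled frames
out_dir = Path("pseudo_labels")
out_dir.mkdir(exist_ok=True)

# stream=True yields one Results object per frame without holding all in memory.
for result in model.predict(source=str(frames_dir), conf=0.6, stream=True):
    lines = []
    for box in result.boxes:
        cls_id = int(box.cls)                  # predicted class index
        x, y, w, h = box.xywhn[0].tolist()     # normalized YOLO-format box
        lines.append(f"{cls_id} {x:.6f} {y:.6f} {w:.6f} {h:.6f}")
    # One label file per frame, in standard YOLO txt format.
    label_path = out_dir / (Path(result.path).stem + ".txt")
    label_path.write_text("\n".join(lines))
```

The confidence threshold trades label noise against coverage; pseudo-labeled frames would typically be merged with the manually annotated set for another round of fine-tuning.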
Problem

Research questions and friction points this paper is trying to address.

Accurately detecting visual elements in educational videos
Handling the non-standard structure and ambiguous text–graphic boundaries of lecture visuals
Mitigating the scarcity of annotated datasets for visual content detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transfer learning for visual element detection
YOLO model optimized for lecture videos through joint training on multiple benchmark datasets (see the data-setup sketch after this list)
Semi-supervised auto-labeling strategy
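One way to set up the joint training on multiple benchmark datasets referenced above is to point a single data config at several sources at once. The directory layout, dataset names, and class list below are hypothetical placeholders, not the benchmarks used in the paper.

```python
# Multi-source data setup sketch for joint pre-training.
# Assumptions: two hypothetical benchmark datasets already converted to YOLO
# format, plus a unified class list; all names and paths are placeholders.
import yaml

combined = {
    # Ultralytics data configs accept lists of image directories per split.
    "train": [
        "datasets/benchmark_a/images/train",
        "datasets/benchmark_b/images/train",
    ],
    "val": [
        "datasets/benchmark_a/images/val",
        "datasets/benchmark_b/images/val",
    ],
    # Hypothetical unified label set for non-structural visual elements.
    "names": ["table", "chart", "diagram", "illustration"],
}

with open("joint_pretrain.yaml", "w") as f:
    yaml.safe_dump(combined, f, sort_keys=False)
```

The resulting joint_pretrain.yaml can then be passed to the same kind of train() call shown earlier, before domain-specific fine-tuning on the lecture-frame set.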
👥 Authors
Dipayan Biswas
Department of Computer Science, University of Houston, Houston, USA
Shishir Shah
Department of Computer Science, University of Houston, Houston, USA
Computer Vision, Pattern Recognition, Image Processing, Biometrics, Surveillance
Jaspal Subhlok
Department of Computer Science, University of Houston, Houston, USA