AI Summary
Existing AI-generated video descriptions often fail to meet the accessibility needs of blind and low-vision (BLV) users because the quality and representativeness of their training data are limited.
Method: This paper introduces the first end-to-end framework for generating accessible video descriptions, uniquely integrating multimodal large language models (MLLMs) with established video accessibility guidelines. It establishes a human-in-the-loop annotation pipeline and designs a BLV-centered evaluation protocol encompassing clarity, accuracy, objectivity, descriptiveness, and user satisfaction.
Contributions/Results: (1) We release VideoA11y-40K, the largest high-quality dataset of accessible video descriptions to date, comprising 40K expert-annotated samples; (2) our method matches professional human annotators across all five evaluation dimensions and significantly outperforms novice annotators; (3) MLLMs fine-tuned on this dataset attain state-of-the-art results on both standard and accessibility-specific metrics, demonstrating robust generalization and practical utility for inclusive video understanding.
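The BLV-centered evaluation protocol above rates descriptions along five dimensions. As a rough illustration, aggregating per-participant ratings might look like the following sketch; the `aggregate_ratings` helper and the sample scores are hypothetical, not the paper's actual analysis code.

```python
from statistics import mean

# The five evaluation dimensions named in the summary above.
DIMENSIONS = ["clarity", "accuracy", "objectivity", "descriptiveness", "satisfaction"]

def aggregate_ratings(ratings: list[dict[str, float]]) -> dict[str, float]:
    """Average per-dimension ratings across participants (hypothetical helper)."""
    return {d: mean(r[d] for r in ratings) for d in DIMENSIONS}

# Hypothetical ratings from two participants on a 1-5 scale.
ratings = [
    {"clarity": 4, "accuracy": 5, "objectivity": 4, "descriptiveness": 4, "satisfaction": 5},
    {"clarity": 5, "accuracy": 4, "objectivity": 4, "descriptiveness": 5, "satisfaction": 4},
]
scores = aggregate_ratings(ratings)
```

Per-dimension means like these would then be compared across annotator groups (novice, trained, and model-generated descriptions).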
Abstract
Video descriptions are crucial for blind and low vision (BLV) users to access visual content. However, current artificial intelligence models for generating descriptions often fall short due to limitations in the quality of human annotations within training datasets, resulting in descriptions that do not fully meet BLV users' needs. To address this gap, we introduce VideoA11y, an approach that leverages multimodal large language models (MLLMs) and video accessibility guidelines to generate descriptions tailored for BLV individuals. Using this method, we have curated VideoA11y-40K, the largest and most comprehensive dataset of 40,000 videos described for BLV users. Rigorous experiments across 15 video categories, involving 347 sighted participants, 40 BLV participants, and seven professional describers, showed that VideoA11y descriptions outperform novice human annotations and are comparable to trained human annotations in clarity, accuracy, objectivity, descriptiveness, and user satisfaction. We evaluated models on VideoA11y-40K using both standard and custom metrics, demonstrating that MLLMs fine-tuned on this dataset produce high-quality accessible descriptions. Code and dataset are available at https://people-robots.github.io/VideoA11y.
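One way the approach could combine MLLMs with accessibility guidelines is by folding guideline text into the generation prompt. The sketch below is a hypothetical illustration under that assumption; the guideline excerpts and the `build_prompt` helper are invented for this example and are not the paper's actual prompt.

```python
# Hypothetical excerpts in the spirit of established video accessibility guidelines.
GUIDELINES = [
    "Describe only what is visible; avoid interpretation or speculation.",
    "Identify people, actions, and scene changes relevant to understanding the video.",
    "Keep language clear and concise for screen-reader playback.",
]

def build_prompt(video_title: str, num_frames: int) -> str:
    """Assemble an MLLM instruction pairing guidelines with a video (hypothetical)."""
    rules = "\n".join(f"{i}. {g}" for i, g in enumerate(GUIDELINES, start=1))
    return (
        f"You are describing the video '{video_title}' ({num_frames} sampled frames) "
        "for blind and low-vision viewers.\n"
        "Follow these accessibility guidelines:\n"
        f"{rules}\n"
        "Write one accessible description."
    )

prompt = build_prompt("Cooking tutorial", 16)
```

The resulting prompt would be sent to an MLLM alongside sampled video frames; human annotators could then revise the output in a human-in-the-loop pass, as the summary describes.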