🤖 AI Summary
Detecting misinformation in short-form videos (e.g., TikTok) is challenging due to their multimodal nature, high dynamism, and pervasive noise. Method: We propose the first verifiability detection framework for multilingual short videos: a modular, inference-only, end-to-end pipeline that integrates speech transcription, OCR, object detection, deepfake identification, multimodal video summarization, and claim verification, with native cross-lingual support. Unlike conventional text-only or unimodal approaches, our method fuses heterogeneous multimodal signals to quantitatively assess a video's "verifiability." Contribution/Results: Evaluated on two manually annotated multilingual TikTok datasets, our system achieves a weighted F1 score of 70.3%, improving fact-checkers' initial triage efficiency. This demonstrates the feasibility and practicality of human-AI collaborative fact-checking for short-form video content.
📝 Abstract
Short-form video platforms like TikTok present unique challenges for misinformation detection due to their multimodal, dynamic, and noisy content. We present ShortCheck, a modular, inference-only pipeline with a user-friendly interface that automatically identifies checkworthy short-form videos to help human fact-checkers. The system integrates speech transcription, OCR, object and deepfake detection, video-to-text summarization, and claim verification. ShortCheck is validated on two manually annotated datasets of TikTok videos in a multilingual setting. The pipeline achieves promising results, with a weighted F1 score above 70%.
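The modular, inference-only design described above can be sketched as a sequence of independent modules that each extract one signal from a video, followed by a simple aggregation step. The module names, placeholder outputs, and the checkworthiness rule below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class VideoSignals:
    """Signals accumulated by the pipeline for a single video."""
    transcript: str = ""
    ocr_text: str = ""
    objects: List[str] = field(default_factory=list)
    deepfake_score: float = 0.0
    summary: str = ""

# Each module mutates the shared VideoSignals; no training is involved.
Module = Callable[[str, VideoSignals], None]

def transcribe(video_path: str, s: VideoSignals) -> None:
    # Placeholder for a multilingual speech-to-text model.
    s.transcript = f"transcript of {video_path}"

def run_ocr(video_path: str, s: VideoSignals) -> None:
    # Placeholder for frame-level OCR over sampled frames.
    s.ocr_text = f"on-screen text in {video_path}"

def detect_deepfake(video_path: str, s: VideoSignals) -> None:
    # Placeholder for a manipulation detector returning a score in [0, 1].
    s.deepfake_score = 0.1

def summarize(video_path: str, s: VideoSignals) -> None:
    # Placeholder for video-to-text summarization fusing earlier signals.
    s.summary = (s.transcript + " " + s.ocr_text).strip()

PIPELINE: List[Module] = [transcribe, run_ocr, detect_deepfake, summarize]

def check_video(video_path: str) -> Dict[str, object]:
    """Run all modules in order and emit a toy checkworthiness verdict."""
    signals = VideoSignals()
    for module in PIPELINE:
        module(video_path, signals)
    # Toy rule (assumption): a video is worth a fact-checker's attention if
    # it yields any verbal/textual content and is not flagged as manipulated.
    checkworthy = bool(signals.summary) and signals.deepfake_score < 0.5
    return {"summary": signals.summary, "checkworthy": checkworthy}
```

Because the modules share only the `VideoSignals` record, any single component (e.g., the OCR or deepfake detector) can be swapped for a stronger model without retraining or touching the rest of the pipeline, which is the practical appeal of an inference-only design for fact-checking teams.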