AI Summary
Existing speaker diarization methods struggle with the challenges posed by open-domain scenarios such as films and TV shows, where speaker counts are large, audio-visual streams are often asynchronous, and environmental conditions are highly complex. This work proposes CineSRD, a framework that, for the first time, unifies visual, acoustic, and linguistic cues from video, speech, and subtitles. It leverages visual anchor clustering to register on-screen speakers and integrates an audio-language model to detect speaking turns, refining the annotations and supplementing off-screen speakers. The authors construct and publicly release the first bilingual (Chinese–English) speaker diarization benchmark dataset for cinematic content. Extensive experiments demonstrate that CineSRD achieves state-of-the-art performance on this new dataset and remains competitive on conventional benchmarks, confirming its robustness and generalization capability in open-domain settings.
Abstract
Traditional speaker diarization systems have primarily focused on constrained scenarios such as meetings and interviews, where the number of speakers is limited and acoustic conditions are relatively clean. To explore open-world speaker diarization, we extend this task to the visual media domain, encompassing complex audiovisual programs such as films and TV series. This new setting introduces several challenges, including long-form video understanding, a large number of speakers, cross-modal asynchrony between audio and visual cues, and uncontrolled in-the-wild variability. To address these challenges, we propose Cinematic Speaker Registration & Diarization (CineSRD), a unified multimodal framework that leverages visual, acoustic, and linguistic cues from video, speech, and subtitles for speaker annotation. CineSRD first performs visual anchor clustering to register initial speakers and then integrates an audio-language model for speaker turn detection, refining annotations and supplementing unregistered off-screen speakers. Furthermore, we construct and release a dedicated speaker diarization benchmark for visual media that includes Chinese and English programs. Experimental results demonstrate that CineSRD achieves superior performance on the proposed benchmark and competitive results on conventional datasets, validating its robustness and generalizability in open-world visual media settings.
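The two-stage idea described above (register speakers from visual anchors, then assign speech segments and create new labels for unmatched off-screen voices) can be sketched as follows. This is a minimal illustrative skeleton, not the paper's actual method: the class and method names, the greedy cosine-distance clustering, the thresholds, and the running-mean centroid update are all assumptions made for the example.

```python
# Hypothetical sketch of a CineSRD-style two-stage pipeline:
#   stage 1: cluster visual "anchor" embeddings to register speakers;
#   stage 2: match speech-segment embeddings to registered speakers,
#            founding a new label for any off-screen voice that matches
#            no visual cluster.
# All names and thresholds are illustrative assumptions.
import numpy as np
from dataclasses import dataclass, field


@dataclass
class SpeakerRegistry:
    centroids: dict = field(default_factory=dict)  # speaker_id -> embedding

    def register_visual_anchors(self, face_embeddings, threshold=0.5):
        """Greedy clustering: each embedding joins the nearest existing
        centroid within `threshold` cosine distance, else founds a speaker."""
        for emb in face_embeddings:
            sid = self._nearest(emb, threshold)
            if sid is None:
                sid = f"spk_{len(self.centroids)}"
                self.centroids[sid] = np.asarray(emb, dtype=float)
            else:
                # Running-mean update keeps the centroid representative.
                self.centroids[sid] = 0.5 * (self.centroids[sid] + emb)
        return list(self.centroids)

    def assign_segment(self, voice_embedding, threshold=0.5):
        """Match one speech segment; an unmatched voice becomes a new
        (off-screen) speaker, mirroring the supplementation step."""
        sid = self._nearest(voice_embedding, threshold)
        if sid is None:
            sid = f"spk_{len(self.centroids)}"
            self.centroids[sid] = np.asarray(voice_embedding, dtype=float)
        return sid

    def _nearest(self, emb, threshold):
        """Return the speaker id whose centroid is closest in cosine
        distance, or None if every distance exceeds `threshold`."""
        best, best_d = None, threshold
        for sid, c in self.centroids.items():
            d = 1.0 - np.dot(emb, c) / (
                np.linalg.norm(emb) * np.linalg.norm(c) + 1e-9
            )
            if d < best_d:
                best, best_d = sid, d
        return best


# Toy usage: two distinct faces register two speakers; an orthogonal
# voice embedding (no visual match) is supplemented as a third speaker.
reg = SpeakerRegistry()
faces = [np.array([1.0, 0.0, 0.0]),
         np.array([0.99, 0.01, 0.0]),
         np.array([0.0, 1.0, 0.0])]
visual_ids = reg.register_visual_anchors(faces)
offscreen_id = reg.assign_segment(np.array([0.0, 0.0, 1.0]))
```

In a real system the face and voice embeddings would come from pretrained encoders and live in different spaces, so the matching step would need a learned cross-modal projection or, as the abstract suggests, an audio-language model reasoning over subtitles and turns; this sketch only illustrates the registration-then-assignment control flow.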