🤖 AI Summary
This work investigates whether authorship attribution models can identify speakers from transcribed speech. Unlike written text, transcriptions lack punctuation and capitalization but exhibit speech-specific patterns such as fillers and backchannels. To address this, we introduce a new benchmark for speaker attribution in manually transcribed dialogues and propose a topic-controlled verification paradigm to mitigate topic-confounding bias. Methodologically, we combine contextual language models (BERT/RoBERTa) with n-gram and stylometric features, incorporating transcription-style analysis and domain-adaptive fine-tuning on speech-transcribed text. Experiments show that general-purpose models achieve moderate speaker discrimination under relaxed settings, but performance drops substantially under topic control. Crucially, fine-tuning on speech-transcribed corpora significantly improves speaker identification accuracy, demonstrating that speech-style features such as disfluencies and interactional cues are discriminative and recoverable from transcriptions.
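To make the feature combination concrete, below is a minimal sketch of a verification scorer that pairs speech-specific stylometric cues (filler and backchannel rates) with character n-gram profiles. The marker lists, function names, and scoring scheme are illustrative assumptions, not the paper's actual pipeline, which also incorporates contextual language models:

```python
import math
import re
from collections import Counter

# Hypothetical marker lists; the paper's actual feature inventory is not
# specified here. Only single-token markers are matched in this sketch.
FILLERS = {"um", "uh", "er", "erm", "like"}
BACKCHANNELS = {"uh-huh", "mm-hmm", "yeah", "right"}

def stylometric_profile(transcript: str, n: int = 3) -> Counter:
    """Build a sparse feature vector from a transcript: filler and
    backchannel rates plus character n-gram counts."""
    tokens = re.findall(r"[a-z'-]+", transcript.lower())
    feats: Counter = Counter()
    total = max(len(tokens), 1)
    for tok in tokens:
        if tok in FILLERS:
            feats["filler_rate"] += 1 / total
        if tok in BACKCHANNELS:
            feats["backchannel_rate"] += 1 / total
    text = " ".join(tokens)
    for i in range(len(text) - n + 1):
        feats["ngram:" + text[i : i + n]] += 1
    return feats

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse feature vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def same_speaker_score(t1: str, t2: str) -> float:
    """Verification score for a trial: higher means more likely the same
    speaker. A decision threshold would be calibrated on held-out trials."""
    return cosine(stylometric_profile(t1), stylometric_profile(t2))

if __name__ == "__main__":
    a = "um so i was like thinking about it uh yesterday yeah"
    b = "uh it was um kind of like a really long day right"
    c = "the quarterly figures indicate a steady rise in revenue"
    print(same_speaker_score(a, b))  # higher: shared fillers/backchannels
    print(same_speaker_score(a, c))  # lower: no speech-style overlap
```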
📝 Abstract
Authorship verification is the task of determining whether two distinct writing samples share the same author, and it is typically concerned with the attribution of written text. In this paper, we explore the attribution of transcribed speech, which poses novel challenges. The main challenge is that many stylistic features, such as punctuation and capitalization, are not informative in this setting. On the other hand, transcribed speech exhibits other patterns, such as filler words and backchannels (e.g., um, uh-huh), which may be characteristic of different speakers. We propose a new benchmark for speaker attribution focused on human-transcribed conversational speech. To limit spurious associations of speakers with topic, we employ both conversation prompts and speakers participating in the same conversation to construct verification trials of varying difficulty. We establish the state of the art on this new benchmark by comparing a suite of neural and non-neural baselines, finding that although written-text attribution models achieve surprisingly good performance in certain settings, they perform markedly worse as conversational topic is increasingly controlled. We present analyses of the impact of transcription style on performance, as well as of the ability of fine-tuning on speech transcripts to improve performance.
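The topic-controlled trial construction described above can be pictured with a small sketch. It assumes each excerpt carries speaker, conversation, and prompt identifiers; the field names and the three difficulty tiers are illustrative assumptions, not the paper's exact protocol:

```python
import itertools
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Side:
    speaker: str       # speaker identity
    conversation: str  # conversation the excerpt came from
    prompt: str        # assigned conversation prompt (topic proxy)
    text: str          # transcribed speech

def build_trials(sides: list[Side], seed: int = 0) -> dict:
    """Pair excerpts into same-speaker (label 1) and different-speaker
    (label 0) verification trials at three difficulty tiers:
      easy:   negatives drawn from different prompts (topic may leak)
      medium: negatives share a prompt but not a conversation
      hard:   negatives come from the same conversation, so topic is
              held fixed within the trial."""
    rng = random.Random(seed)
    positives, easy, medium, hard = [], [], [], []
    for a, b in itertools.combinations(sides, 2):
        if a.speaker == b.speaker:
            positives.append((a, b, 1))
        elif a.conversation == b.conversation:
            hard.append((a, b, 0))
        elif a.prompt == b.prompt:
            medium.append((a, b, 0))
        else:
            easy.append((a, b, 0))
    # Balance each tier against the same positive set so that accuracy
    # is comparable across tiers.
    k = len(positives)
    return {
        "easy": positives + rng.sample(easy, min(k, len(easy))),
        "medium": positives + rng.sample(medium, min(k, len(medium))),
        "hard": positives + rng.sample(hard, min(k, len(hard))),
    }
```

Holding the positive trials fixed across tiers means that any drop in accuracy from easy to hard isolates the effect of topic control rather than a change in the underlying speaker pairs.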