Descriptor: Extended-Length Audio Dataset for Synthetic Voice Detection and Speaker Recognition (ELAD-SVDSR)

๐Ÿ“… 2025-09-30
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
A critical bottleneck in deepfake audio research is the scarcity of long-duration, high-fidelity speech datasets that support both synthetic speech detection and speaker recognition. To address this, ELAD-SVDSR provides 45-minute recordings of read speech from 36 participants, captured under controlled acoustic conditions with five microphones of differing quality and released with anonymized demographic metadata. The dataset further includes 20 deepfake voices generated from these recordings to showcase its potential. By focusing on extended-duration audio, it enables long-context modeling, fine-grained acoustic analysis (e.g., pitch contours and intonation patterns), and the creation of challenging examples for training and evaluating detection systems, establishing a benchmark for voice biometric security and deepfake audio defense.

๐Ÿ“ Abstract
This paper introduces the Extended Length Audio Dataset for Synthetic Voice Detection and Speaker Recognition (ELAD SVDSR), a resource specifically designed to facilitate the creation of high-quality deepfakes and to support the development of detection systems trained against them. The dataset comprises 45-minute audio recordings from 36 participants, each reading various newspaper articles under controlled conditions, captured via five microphones of differing quality. By focusing on extended-duration audio, ELAD SVDSR captures a richer range of speech attributes, such as pitch contours, intonation patterns, and nuanced delivery, enabling models to generate more realistic and coherent synthetic voices. In turn, this approach allows for the creation of robust deepfakes that can serve as challenging examples in datasets used to train and evaluate synthetic voice detection methods. As part of this effort, 20 deepfake voices have already been created and added to the dataset to showcase its potential. Anonymized metadata on speaker demographics accompanies the dataset. ELAD SVDSR is expected to spur significant advancements in audio forensics, biometric security, and voice authentication systems.
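The pitch contours the abstract highlights are a frame-wise F0 trajectory extracted from the waveform. As a minimal sketch (not the authors' pipeline, and using a simple autocorrelation estimator rather than any specific toolkit), the idea can be illustrated in plain NumPy; the sample rate, frame size, and F0 search range below are illustrative assumptions:

```python
import numpy as np

def pitch_contour(signal, sr, frame_len=2048, hop=512, fmin=50.0, fmax=500.0):
    """Return one F0 estimate (Hz) per frame via frame-wise autocorrelation.

    For each frame, the autocorrelation is searched for its peak within the
    lag range corresponding to [fmin, fmax]; the peak lag gives the period.
    """
    lag_min = int(sr / fmax)   # shortest period considered
    lag_max = int(sr / fmin)   # longest period considered
    f0 = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        frame = frame - frame.mean()
        # One-sided autocorrelation of the frame
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lag = lag_min + np.argmax(ac[lag_min:lag_max])
        f0.append(sr / lag)
    return np.array(f0)

# Synthetic check: a steady 220 Hz tone should yield a flat contour near 220 Hz.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220.0 * t)
contour = pitch_contour(tone, sr)
```

Over a 45-minute recording, such a contour (tens of thousands of frames) exposes long-range prosodic patterns that short clips cannot, which is the core motivation for extended-duration audio.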
Problem

Research questions and friction points this paper is trying to address.

Developing an extended-length audio dataset for synthetic voice detection
Creating realistic deepfakes to train robust detection systems
Supporting speaker recognition research with diverse speech attributes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extended audio dataset captures diverse speech attributes
Multiple microphone recordings enable robust deepfake generation
Dataset supports development of synthetic voice detection systems
๐Ÿ”Ž Similar Papers
No similar papers found.
Rahul Vijaykumar
Dept of Electrical and Computer Engineering, Clarkson University, Potsdam, NY, USA
Ajan Ahmed
Dept of Electrical and Computer Engineering, Clarkson University, Potsdam, NY, USA
John Parker
Dept of Electrical and Computer Engineering, Clarkson University, Potsdam, NY, USA
Dinesh Pendyala
Dept of Electrical and Computer Engineering, Clarkson University, Potsdam, NY, USA
Aidan Collins
Dept of Electrical and Computer Engineering, Clarkson University, Potsdam, NY, USA
Stephanie Schuckers
Professor, Clarkson University
Masudul H. Imtiaz
Dept of Electrical and Computer Engineering, Clarkson University, Potsdam, NY, USA