Protecting Vulnerable Voices: Synthetic Dataset Generation for Self-Disclosure Detection

📅 2025-07-24
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing research is hindered by the absence of open, comprehensively annotated datasets for personally identifiable information (PII) self-disclosure, impeding reproducible privacy risk assessment. To address this, we propose the first fine-grained taxonomy of 19 PII leakage categories designed for vulnerable populations. Leveraging three large language models (Llama2-7B, Llama3-8B, and zephyr-7b-beta) with sequential instruction prompting, we generate high-fidelity, multi-span synthetic Reddit posts. Through de-identification and human–automated collaborative validation, we show the synthetic data is unlinkable to the original users via common search mechanisms, indistinguishable from real posts to trained human annotators, and comparable to real data for model training. We publicly release this privacy-preserving synthetic dataset alongside full source code, establishing a standardized benchmark for PII risk identification research.
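The sequential instruction prompting mentioned above can be sketched as a loop that issues one instruction at a time, carrying the running draft forward as context for the next step. The sketch below is a hypothetical illustration, not the paper's released code: `llm` is a stub standing in for any of the three generators (Llama2-7B, Llama3-8B, zephyr-7b-beta), and the instruction wording is invented for the example.

```python
# Hedged sketch of sequential instruction prompting (assumed structure, not
# the paper's implementation). Each instruction is applied in turn, with the
# previous draft included in the next prompt.

def llm(prompt: str) -> str:
    # Stub generator: a real setup would call Llama2-7B, Llama3-8B,
    # or zephyr-7b-beta here.
    return f"[generated text for: {prompt.splitlines()[-1]}]"

def sequential_prompting(instructions, seed_post: str) -> str:
    draft = seed_post
    for instruction in instructions:
        prompt = f"Current draft:\n{draft}\n{instruction}"
        draft = llm(prompt)  # each step rewrites the running draft
    return draft

# Hypothetical instruction chain for producing a synthetic equivalent
# of a PII-revealing post:
instructions = [
    "Rewrite the post in a new voice, preserving its PII disclosure types.",
    "Replace every concrete identifier with a plausible synthetic one.",
    "Smooth the wording so it reads like a natural Reddit post.",
]
synthetic = sequential_prompting(instructions, "Original seed post text...")
print(synthetic)
```

The key design point is that each model call sees only the current draft plus one instruction, which keeps individual prompts short while letting constraints accumulate across steps.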

๐Ÿ“ Abstract
Social platforms such as Reddit have a network of communities of shared interests, with a prevalence of posts and comments from which one can infer users' Personal Information Identifiers (PIIs). While such self-disclosures can lead to rewarding social interactions, they pose privacy risks and the threat of online harms. Research into the identification and retrieval of such risky self-disclosures of PIIs is hampered by the lack of open-source labeled datasets. To foster reproducible research into PII-revealing text detection, we develop a novel methodology to create synthetic equivalents of PII-revealing data that can be safely shared. Our contributions include creating a taxonomy of 19 PII-revealing categories for vulnerable populations and the creation and release of a synthetic PII-labeled multi-text span dataset generated from 3 text generation Large Language Models (LLMs), Llama2-7B, Llama3-8B, and zephyr-7b-beta, with sequential instruction prompting to resemble the original Reddit posts. The utility of our methodology to generate this synthetic dataset is evaluated with three metrics: First, we require reproducibility equivalence, i.e., results from training a model on the synthetic data should be comparable to those obtained by training the same models on the original posts. Second, we require that the synthetic data be unlinkable to the original users, through common mechanisms such as Google Search. Third, we wish to ensure that the synthetic data be indistinguishable from the original, i.e., trained humans should not be able to tell them apart. We release our dataset and code at https://netsys.surrey.ac.uk/datasets/synthetic-self-disclosure/ to foster reproducible research into PII privacy risks in online social media.
Problem

Research questions and friction points this paper is trying to address.

Lack of open-source labeled datasets for PII self-disclosure detection
Privacy risks from self-disclosed personal information on social platforms
Need for synthetic PII data to enable safe, reproducible research
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic dataset generation using multiple LLMs
Taxonomy of 19 PII-revealing categories
Sequential instruction prompting for realism
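The first of the paper's three utility metrics, reproducibility equivalence, amounts to checking that a detector trained on the synthetic posts scores about as well as the same detector trained on the originals. The sketch below is a hypothetical illustration of that check; the function name, tolerance value, and the scores in the usage example are all invented for the sketch, not results from the paper.

```python
# Hedged sketch of the "reproducibility equivalence" check (assumed form):
# train the same model once on real data and once on synthetic data, then
# require the two evaluation scores to agree within a tolerance.

def reproducibility_equivalent(f1_real: float, f1_synthetic: float,
                               tolerance: float = 0.05) -> bool:
    """True if synthetic-trained performance tracks real-trained performance."""
    return abs(f1_real - f1_synthetic) <= tolerance

# Illustrative placeholder scores only (hypothetical, not from the paper):
close = reproducibility_equivalent(0.81, 0.78)   # gap 0.03, within tolerance
far = reproducibility_equivalent(0.81, 0.60)     # gap 0.21, outside tolerance
print(close, far)
```

The tolerance here is an assumption; in practice it would be set from the variance observed across training runs on the real data.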