Automated Data Enrichment using Confidence-Aware Fine-Grained Debate among Open-Source LLMs for Mental Health and Online Safety

📅 2025-12-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
High annotation costs and difficulties in labeling dynamic, real-world indicators—such as life events and risk behaviors—for mental health analysis and online risk identification motivate this work. We propose an automated data augmentation method leveraging multiple open-source large language model (LLM) agents. Our core contribution is the Confidence-Aware Fine-Grained Debate (CFD) framework: multiple LLM agents engage in structured, evidence-based debates at a fine-grained level, integrating confidence estimation and consensus generation, with debate outputs explicitly incorporated into feature learning. Experiments on our curated mental health and online risk datasets demonstrate that CFD significantly outperforms diverse baselines. Incorporating debate-derived features improves performance on online safety tasks by 10.1%, validating CFD’s effectiveness in enhancing both data quality and downstream task accuracy.

📝 Abstract
Real-world indicators—such as life events for mental health analysis and risky behaviour for online safety—are important for improving natural language processing (NLP) tasks, yet labelling such information in NLP training datasets is often costly and/or difficult given the dynamic nature of such events. This paper compares several LLM-based data enrichment methods and introduces a novel Confidence-Aware Fine-Grained Debate (CFD) framework in which multiple LLM agents simulate human annotators and exchange fine-grained evidence to reach consensus. We describe two new expert-annotated datasets: a mental health Reddit wellbeing dataset and an online safety Facebook sharenting risk dataset. Our CFD framework achieves the most robust data enrichment performance compared to a range of baselines, and we show that this type of data enrichment consistently improves downstream tasks. Enriched features incorporated via debate transcripts yield the largest gains, outperforming the non-enriched baseline by 10.1% on the online safety task.
Problem

Research questions and friction points this paper is trying to address.

Enriching NLP datasets with real-world indicators for mental health and online safety
Reducing costly manual labeling through automated LLM-based data enrichment
Improving downstream task performance via confidence-aware multi-agent debate framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multiple LLM agents simulate human annotators for consensus.
Confidence-aware fine-grained debate framework exchanges detailed evidence.
Debate transcripts enrich features, boosting downstream task performance.
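The summary above describes CFD only at a high level, and the paper's actual aggregation procedure is not given here. The following is a minimal Python sketch of one *plausible* confidence-weighted consensus loop consistent with that description; the function names, the voting scheme, and the agreement threshold are illustrative assumptions, not the authors' method.

```python
from collections import defaultdict

def consensus_round(agent_votes, threshold=0.9):
    """Aggregate (label, confidence) votes from several LLM agents.

    Returns (label, agreement), where agreement is the share of the
    total confidence mass behind the winning label.
    """
    mass = defaultdict(float)
    for label, conf in agent_votes:
        mass[label] += conf
    total = sum(mass.values())
    label, top = max(mass.items(), key=lambda kv: kv[1])
    agreement = top / total if total else 0.0
    return label, agreement

def debate(rounds_of_votes, threshold=0.9):
    """Run successive debate rounds until confidence-weighted agreement
    passes the threshold, or the rounds are exhausted."""
    label, agreement = None, 0.0
    for votes in rounds_of_votes:
        label, agreement = consensus_round(votes, threshold)
        if agreement >= threshold:
            break  # consensus reached; stop debating
    return label, agreement

# Three hypothetical agents label a post for a "life event" indicator.
# After exchanging evidence, the dissenting agent converges in round 2.
round1 = [("job_loss", 0.6), ("none", 0.55), ("job_loss", 0.7)]
round2 = [("job_loss", 0.8), ("job_loss", 0.65), ("job_loss", 0.9)]
print(debate([round1, round2]))  # → ('job_loss', 1.0)
```

In a real pipeline the per-round votes would come from prompting each LLM agent with the item plus the other agents' evidence from the previous round; the resulting consensus labels (and transcripts) would then be fed into downstream feature learning.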
Junyu Mao
University of Southampton, UK
Anthony Hills
Queen Mary University of London, UK
Talia Tseriotou
Queen Mary University of London, UK
Maria Liakata
Professor Queen Mary University of London/University of Warwick, Alan Turing Institute AI Fellow
Natural Language Processing (NLP), Semantics & Discourse, BioNLP & NLP for Mental Health, Social Media, Machine Learning
Aya Shamir
Bar Ilan University, Israel
Dan Sayda
Bar Ilan University, Israel
Dana Atzil-Slonim
Bar Ilan University, Israel
Natalie Djohari
University of Southampton, UK
Arpan Mandal
University of Southampton, UK
Silke Roth
University of Southampton, UK
Pamela Ugwudike
University of Southampton, UK
Mahesan Niranjan
University of Southampton, UK
Stuart E. Middleton
University of Southampton, UK