🤖 AI Summary
This work is motivated by the high cost and difficulty of annotating dynamic, real-world indicators, such as life events and risk behaviours, for mental health analysis and online risk identification. We propose an automated data augmentation method leveraging multiple open-source large language model (LLM) agents. Our core contribution is the Confidence-Aware Fine-Grained Debate (CFD) framework: multiple LLM agents engage in structured, evidence-based debates at a fine-grained level, integrating confidence estimation and consensus generation, with debate outputs explicitly incorporated into feature learning. Experiments on our curated mental health and online risk datasets demonstrate that CFD significantly outperforms a diverse set of baselines. Incorporating debate-derived features improves performance on the online safety task by 10.1%, validating CFD's effectiveness in enhancing both data quality and downstream accuracy.
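The debate loop described above can be sketched in code. The following is a minimal, hypothetical sketch, not the authors' implementation: the `mock_agent` function stands in for a prompted open-source LLM, and consensus is assumed to be a confidence-weighted vote over evidence-sharing rounds; all names, thresholds, and labels here are illustrative assumptions.

```python
import random
from dataclasses import dataclass

@dataclass
class Opinion:
    label: str        # an agent's proposed annotation
    confidence: float # self-reported confidence in [0, 1]
    evidence: str     # free-text justification shared with peers

def mock_agent(name, text, peer_evidence):
    """Stand-in for an LLM annotator agent. A real agent would be an
    open-source LLM prompted with the post plus peers' latest evidence."""
    random.seed(hash((name, text, tuple(peer_evidence))) % (2**32))
    label = random.choice(["life_event", "no_event"])
    confidence = round(random.uniform(0.5, 1.0), 2)
    return Opinion(label, confidence, f"{name}: cue words in '{text[:20]}...'")

def debate(text, agent_names, rounds=3, threshold=0.8):
    """Run fine-grained debate rounds until confidence-weighted agreement
    on one label reaches `threshold`, then return the consensus."""
    evidence, winner, agreement = [], None, 0.0
    for _ in range(rounds):
        opinions = [mock_agent(n, text, evidence) for n in agent_names]
        scores = {}
        for op in opinions:  # confidence-weighted vote per label
            scores[op.label] = scores.get(op.label, 0.0) + op.confidence
        winner = max(scores, key=scores.get)
        agreement = scores[winner] / sum(scores.values())
        evidence.extend(op.evidence for op in opinions)  # share for next round
        if agreement >= threshold:  # early stop on consensus
            break
    return winner, agreement, evidence

label, agreement, transcript = debate(
    "I just lost my job last week", ["agent_a", "agent_b", "agent_c"]
)
```

The returned `transcript` corresponds to the debate outputs that the summary says are incorporated into downstream feature learning.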
📝 Abstract
Real-world indicators, such as life events for mental health analysis and risky behaviour for online safety, are important for improving natural language processing (NLP) tasks, yet labelling such information in NLP training datasets is often costly or difficult given the dynamic nature of such events. This paper compares several LLM-based data enrichment methods and introduces a novel Confidence-Aware Fine-Grained Debate (CFD) framework in which multiple LLM agents simulate human annotators and exchange fine-grained evidence to reach consensus. We describe two new expert-annotated datasets: a mental health Reddit wellbeing dataset and an online safety Facebook sharenting risk dataset. Our CFD framework achieves the most robust data enrichment performance compared to a range of baselines, and we show that this type of enrichment consistently improves downstream tasks. Enriched features incorporated via debate transcripts yield the largest gains, outperforming the non-enriched baseline by 10.1% on the online safety task.