Adversarial Training for Failure-Sensitive User Simulation in Mental Health Dialogue Optimization

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Mental health task-oriented dialogue (TOD) systems lack high-fidelity, failure-sensitive user simulators. Method: We propose an adversarial user simulation framework comprising a fine-tuned LLM-based generator and a binary-classifier discriminator. Through iterative adversarial training, the generator learns to match the failure distribution of real users, with KL divergence and correlation between simulated and actual failure rates serving as distributional fidelity metrics. Contribution/Results: This is the first application of adversarial training to user simulation in the mental health TOD domain, and it significantly improves failure-exposure capability and failure-mode diversity. Experiments show strong correlation (r > 0.9) between simulated and real failure rates, a 42% reduction in KL divergence between failure distributions, and a 37% drop in discriminator accuracy within three training rounds, demonstrating substantially enhanced simulation authenticity.
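The two fidelity metrics named above can be illustrated with a small sketch. All distributions and rates below are made-up toy values (not the paper's data): `real_modes`/`sim_*` are hypothetical failure-mode distributions, and `real_rates`/`sim_rates` are hypothetical per-configuration failure rates.

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) for two discrete failure-mode distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def pearson(xs, ys):
    """Pearson correlation between two equal-length rate vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

# Hypothetical distributions over four failure categories.
real_modes = [0.40, 0.30, 0.20, 0.10]
sim_before = [0.70, 0.10, 0.15, 0.05]   # skewed zero-shot simulator
sim_after  = [0.45, 0.28, 0.18, 0.09]   # after adversarial training

# Hypothetical failure rates across five chatbot configurations.
real_rates = [0.12, 0.30, 0.08, 0.45, 0.22]
sim_rates  = [0.14, 0.28, 0.10, 0.41, 0.25]

kl_before = kl_divergence(real_modes, sim_before)
kl_after = kl_divergence(real_modes, sim_after)
r = pearson(real_rates, sim_rates)
```

A simulator that tracks real users well should show a high `r` and a `kl_after` well below `kl_before`, mirroring the r > 0.9 and KL-reduction results reported in the summary.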

📝 Abstract
Realistic user simulation is crucial for training and evaluating task-oriented dialogue (TOD) systems, yet creating simulators that accurately replicate human behavior remains challenging. A key property of effective simulators is their ability to expose failure modes of the systems they evaluate. We present an adversarial training framework that iteratively improves user simulator realism through a competitive dynamic between a generator (user simulator) and a discriminator. Applied to mental health support chatbots, our approach demonstrates that fine-tuned simulators dramatically outperform zero-shot base models at surfacing system issues, and adversarial training further enhances diversity, distributional alignment, and predictive validity. The resulting simulator achieves a strong correlation between simulated and real failure occurrence rates across diverse chatbot configurations while maintaining low distributional divergence of failure modes. Discriminator accuracy decreases drastically after three adversarial iterations, suggesting improved realism. These results provide evidence that adversarial training is a promising approach for creating realistic user simulators in mental health support TOD domains, enabling rapid, reliable, and cost-effective system evaluation before deployment.
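The competitive dynamic described in the abstract (a discriminator trained to tell real from simulated dialogues, a generator updated to fool it) can be sketched with toy stand-ins. Here "dialogues" are 1-D scores, the discriminator is a mean-midpoint threshold classifier, and the generator update is a simple shift toward the real distribution; all of this is an illustrative assumption, not the paper's actual training procedure.

```python
import random

random.seed(0)

# Real users cluster near 1.0; the simulator starts near 0.0.
def sample_real(n):
    return [random.gauss(1.0, 0.3) for _ in range(n)]

def sample_simulated(n, shift):
    return [random.gauss(shift, 0.3) for _ in range(n)]

def train_discriminator(real, sim):
    # Toy discriminator: threshold at the midpoint of the two class means.
    return (sum(real) / len(real) + sum(sim) / len(sim)) / 2

def accuracy(thr, real, sim):
    correct = sum(x > thr for x in real) + sum(x <= thr for x in sim)
    return correct / (len(real) + len(sim))

shift = 0.0
accs = []
for _ in range(3):  # three adversarial rounds, as in the paper's setup
    real = sample_real(200)
    sim = sample_simulated(200, shift)
    thr = train_discriminator(real, sim)
    accs.append(accuracy(thr, real, sim))
    # Stand-in for fine-tuning the generator against the discriminator:
    # move the simulated distribution halfway toward the real one.
    shift += 0.5 * (1.0 - shift)
```

As the generator closes the gap, discriminator accuracy falls toward chance (0.5), which is the realism signal the paper reports: a 37% accuracy drop after three rounds.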
Problem

Research questions and friction points this paper is trying to address.

Develops adversarial training for realistic user simulation in mental health chatbots
Enhances simulator ability to expose failure modes in dialogue systems
Improves evaluation reliability and cost-effectiveness before system deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial training framework improves user simulator realism
Generator and discriminator compete to expose system failure modes
Fine-tuned simulators outperform base models in mental health chatbots
Ziyi Zhu
Intel, Columbia University
Olivier Tieleman
Slingshot AI
Caitlin A. Stamatis
Slingshot AI
Luka Smyth
Slingshot AI
Thomas D. Hull
Slingshot AI
Daniel R. Cahn
Slingshot AI
Matteo Malgaroli
Department of Psychiatry, NYU School of Medicine