🤖 AI Summary
Mental health stigma remains a critical barrier to help-seeking, and AI interventions often fail to address its sociocultural roots. Method: This longitudinal experimental study (N=216 quantitative; n=32 qualitative) compared unidirectional informational and bidirectional collaborative chatbots over a two-week intervention, testing whether social contact mechanisms mediate stigma reduction. Contribution/Results: We provide the first empirical evidence that bidirectional human-AI collaboration significantly enhances user empathy (+37%), perceived AI trustworthiness (+2.1/5), and likability, leading to greater reductions in both internalized and externalized stigma (p<0.01). Crucially, we identify “role–context congruence” as a foundational ethical design principle: misalignment between an AI’s assigned role (e.g., peer vs. clinician) and the interaction context undermines trust and introduces ethical risks. The study establishes a replicable, theory-informed collaborative paradigm and design framework for AI-enabled mental health communication.
📝 Abstract
AI conversational agents have demonstrated efficacy in social contact interventions for stigma reduction at low cost. However, how specific interaction designs contribute to these effects remains unclear. This study investigates how participating in three human-chatbot interactions affects attitudes toward mental illness. We developed three chatbots capable of either one-way information dissemination from the chatbot to a human or two-way cooperation, in which the chatbot and a human exchange thoughts and work together on a cooperative task. We then conducted a two-week mixed-methods study to examine variations over time and across group memberships. The results indicate that human-AI cooperation can effectively reduce stigma toward individuals with mental illness by fostering human-AI relationships through social contact. Moreover, compared with a one-way chatbot, interacting with a cooperative chatbot led participants to perceive it as more competent and likable and promoted greater empathy during the conversation. However, despite this success in reducing stigma, inconsistencies between the chatbot's role and the mental health context raised concerns. We discuss the implications of our findings for human-chatbot interaction designs aimed at changing human attitudes.