📝 Abstract
Artificial intelligence has been introduced as a way to improve access to mental health support. However, most AI mental health chatbots rely on a limited range of disciplinary input and fail to integrate expertise across the chatbot's lifecycle. This paper examines the cost-benefit trade-off of interdisciplinary collaboration in AI mental health chatbots. We argue that involving experts from technology, healthcare, ethics, and law across key lifecycle phases is essential to ensure value alignment and compliance with the high-risk requirements of the EU AI Act. We also highlight practical recommendations and existing frameworks to help balance the costs and benefits of interdisciplinarity in mental health chatbots.