🤖 AI Summary
This study investigates the clinical applicability of large language models (LLMs) in cognitive behavioral therapy (CBT) support. To systematically evaluate LLM capabilities, the authors introduce CBT-Bench, the first multi-level benchmark for CBT-oriented evaluation, spanning three task categories: basic knowledge acquisition, cognitive model understanding, and therapeutic response generation. These categories form a hierarchy of core CBT competencies, from factual knowledge recall through cognitive structure analysis to empathetic dialogue generation, and are assessed through multiple-choice, classification, and open-ended generation tasks. Experimental results show that while LLMs perform well on foundational CBT knowledge questions, they exhibit significant deficiencies on higher-order tasks, particularly cognitive distortion identification and personalized therapeutic response generation, revealing critical limitations for real-world clinical deployment.
📝 Abstract
There is a significant gap between patient needs and available mental health support today. In this paper, we aim to thoroughly examine the potential of using Large Language Models (LLMs) to assist professional psychotherapy. To this end, we propose a new benchmark, CBT-BENCH, for the systematic evaluation of cognitive behavioral therapy (CBT) assistance. CBT-BENCH comprises three levels of tasks: I: basic CBT knowledge acquisition, assessed via multiple-choice questions; II: cognitive model understanding, assessed via cognitive distortion classification, primary core belief classification, and fine-grained core belief classification; III: therapeutic response generation, assessed by generating responses to patient utterances in CBT therapy sessions. These tasks encompass key aspects of CBT that could potentially be enhanced through AI assistance, while also outlining a hierarchy of capability requirements, ranging from basic knowledge recitation to engaging in real therapeutic conversations. We evaluate representative LLMs on our benchmark. Experimental results indicate that while LLMs perform well at reciting CBT knowledge, they fall short in complex real-world scenarios that require deep analysis of a patient's cognitive structure and the generation of effective responses, suggesting directions for future work.
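To make the task hierarchy concrete, here is a minimal sketch of how the closed-form levels of such a benchmark could be scored. This is not the paper's actual evaluation code: the metric choices (exact-match accuracy for multiple-choice Level I, macro-F1 for the Level II classification tasks) and all data are illustrative assumptions; Level III open-ended generation would require separate human or model-based judging and is omitted here.

```python
# Illustrative scoring helpers for the closed-form benchmark levels.
# Metric choices and data are assumptions, not taken from the paper.

def accuracy(preds, golds):
    """Level I (multiple-choice): fraction of exact matches."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def macro_f1(preds, golds):
    """Level II (classification, e.g. cognitive distortion labels):
    unweighted mean of per-label F1, so rare labels count equally."""
    labels = set(golds) | set(preds)
    f1s = []
    for lab in labels:
        tp = sum(p == lab and g == lab for p, g in zip(preds, golds))
        fp = sum(p == lab and g != lab for p, g in zip(preds, golds))
        fn = sum(g == lab and p != lab for p, g in zip(preds, golds))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

if __name__ == "__main__":
    # Toy Level I run: 3 of 4 answer letters correct.
    print(accuracy(["A", "C", "B", "D"], ["A", "C", "B", "A"]))
    # Toy Level II run over invented distortion labels.
    print(macro_f1(["catastrophizing", "labeling", "catastrophizing"],
                   ["catastrophizing", "catastrophizing", "catastrophizing"]))
```

A harness like this makes the paper's main finding easy to reproduce in spirit: Level I accuracy tends to saturate quickly, while macro-F1 on fine-grained Level II labels exposes the weaknesses the authors report.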