CBT-Bench: Evaluating Large Language Models on Assisting Cognitive Behavior Therapy

📅 2024-10-17
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the clinical applicability of large language models (LLMs) in cognitive behavioral therapy (CBT) support. To systematically evaluate LLM capabilities, we introduce CBT-Bench—the first multi-level benchmark for CBT-oriented evaluation—comprising three task categories: knowledge acquisition, cognitive model comprehension, and therapeutic response generation. We propose a structured evaluation framework aligned with core CBT competencies, explicitly distinguishing three hierarchical capabilities: factual knowledge recall, cognitive structure analysis, and empathetic dialogue generation. The multi-task evaluation combines multiple-choice, classification, and open-ended generation formats. Experimental results show that while LLMs perform well on foundational CBT knowledge questions, they exhibit significant deficiencies in higher-order tasks—particularly cognitive distortion identification and personalized therapeutic response generation—revealing critical limitations for real-world clinical deployment.

📝 Abstract
There is a significant gap between patient needs and available mental health support today. In this paper, we aim to thoroughly examine the potential of using Large Language Models (LLMs) to assist professional psychotherapy. To this end, we propose a new benchmark, CBT-BENCH, for the systematic evaluation of cognitive behavioral therapy (CBT) assistance. We include three levels of tasks in CBT-BENCH: I: Basic CBT knowledge acquisition, with the task of multiple-choice questions; II: Cognitive model understanding, with the tasks of cognitive distortion classification, primary core belief classification, and fine-grained core belief classification; III: Therapeutic response generation, with the task of generating responses to patient speech in CBT therapy sessions. These tasks encompass key aspects of CBT that could potentially be enhanced through AI assistance, while also outlining a hierarchy of capability requirements, ranging from basic knowledge recitation to engaging in real therapeutic conversations. We evaluated representative LLMs on our benchmark. Experimental results indicate that while LLMs perform well in reciting CBT knowledge, they fall short in complex real-world scenarios requiring deep analysis of patients' cognitive structures and generating effective responses, suggesting potential future work.
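The three-level task hierarchy above can be made concrete with a small scoring sketch. The following is a hypothetical illustration of how a Level I multiple-choice item might be scored; the item schema and the stand-in `model` callable are assumptions for illustration, not the paper's actual data format or evaluation code.

```python
# Hypothetical sketch of scoring a Level I (basic CBT knowledge)
# multiple-choice item. The item schema and the `model` callable
# are illustrative assumptions, not CBT-BENCH's actual interface.

def score_multiple_choice(items, model):
    """Return the accuracy of `model` over multiple-choice `items`."""
    correct = 0
    for item in items:
        # Render the question and lettered choices as a single prompt.
        prompt = item["question"] + "\n" + "\n".join(
            f"{label}. {text}" for label, text in item["choices"].items()
        )
        prediction = model(prompt)  # expected to return a choice label, e.g. "B"
        if prediction == item["answer"]:
            correct += 1
    return correct / len(items)

# Toy example with a trivial rule-based "model" standing in for an LLM.
items = [
    {
        "question": "In CBT, which term describes an exaggerated, irrational thought pattern?",
        "choices": {"A": "Cognitive distortion", "B": "Core belief", "C": "Exposure"},
        "answer": "A",
    },
]
accuracy = score_multiple_choice(items, lambda prompt: "A")
```

Levels II and III would need richer scoring (multi-label classification metrics and human or model-based judgments of generated responses), which is exactly where the paper reports current LLMs falling short.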
Problem

Research questions and friction points this paper is trying to address:
- Large Language Models
- Cognitive Behavioral Therapy
- Psychological State Understanding
Innovation

Methods, ideas, or system contributions that make the work stand out:
- CBT-BENCH
- Large Language Models
- Cognitive Behavioral Therapy
👥 Authors

Mian Zhang
University of Texas at Dallas
LLM

Xianjun Yang
Department of Computer Science, University of California, Santa Barbara

Xinlu Zhang
University of California, Santa Barbara
Machine Learning · Natural Language Processing · Time Series Modeling · Multimodal Learning

Travis Labrum
School of Social Work, University of Pittsburgh

Jamie C. Chiu
Department of Psychology, Princeton University

S. Eack
School of Social Work, University of Pittsburgh

Fei Fang
School of Computer Science, Carnegie Mellon University

William Yang Wang
Mellichamp Chair Professor, University of California, Santa Barbara
Natural Language Processing · Machine Learning · Artificial Intelligence · Language and Vision

Zhiyu Zoey Chen
Assistant Professor, the University of Texas at Dallas
Artificial Intelligence · Natural Language Processing · AI for Health