TN-Eval: Rubric and Evaluation Protocols for Measuring the Quality of Behavioral Therapy Notes

📅 2025-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Behavioral therapy notes lack standardized quality criteria, impeding legal compliance and clinical utility. To address this, we propose the first multidimensional quality assessment framework specifically designed for behavioral therapy notes, centered on three core dimensions: completeness, conciseness, and faithfulness. Methodologically, we replace conventional Likert-scale evaluations with a novel structured rubric, supported by a manually annotated dataset and a standardized evaluation protocol. Our approach integrates expert co-design, dual-source data augmentation (human-written and LLM-generated notes), and fine-grained human annotation guided by the rubric, complemented by inter-annotator agreement analysis. Results demonstrate that rubric-based assessment yields superior reliability and interpretability. Empirical analysis reveals widespread deficiencies in completeness and conciseness among clinician-authored notes; while LLM-generated notes exhibit faithfulness limitations (particularly hallucinations), they achieve higher preference and scores from clinical practitioners in blinded evaluations.

📝 Abstract
Behavioral therapy notes are important for both legal compliance and patient care. Unlike progress notes in physical health, quality standards for behavioral therapy notes remain underdeveloped. To address this gap, we collaborated with licensed therapists to design a comprehensive rubric for evaluating therapy notes across key dimensions: completeness, conciseness, and faithfulness. Further, we extend a public dataset of behavioral health conversations with therapist-written notes and LLM-generated notes, and apply our evaluation framework to measure their quality. We find that: (1) A rubric-based manual evaluation protocol offers more reliable and interpretable results than traditional Likert-scale annotations. (2) LLMs can mimic human evaluators in assessing completeness and conciseness but struggle with faithfulness. (3) Therapist-written notes often lack completeness and conciseness, while LLM-generated notes contain hallucinations. Surprisingly, in a blind test, therapists prefer and judge LLM-generated notes to be superior to therapist-written notes.
Problem

Research questions and friction points this paper is trying to address.

Develop rubric for evaluating behavioral therapy notes quality
Compare therapist-written and LLM-generated notes using rubric
Assess LLMs' ability to mimic human note evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed rubric for therapy notes evaluation
Extended dataset with human and LLM notes
Applied rubric to compare note quality
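The rubric-based protocol above replaces a single Likert rating with per-dimension checklists scored by annotators, whose agreement can then be measured. A minimal sketch of that idea follows; the rubric items, dimension weights, and note fields here are illustrative assumptions, not the paper's actual rubric.

```python
# Illustrative sketch of rubric-based note scoring with inter-annotator
# agreement. RUBRIC items below are hypothetical examples, not the
# rubric from the paper.
RUBRIC = {
    "completeness": [
        "states presenting problem",
        "summarizes interventions used",
        "records client response",
    ],
    "conciseness": [
        "no verbatim transcript copying",
        "no repeated content",
    ],
    "faithfulness": [
        "every claim is traceable to the session",
    ],
}

def score_note(annotations: dict[str, list[bool]]) -> dict[str, float]:
    """Per-dimension score = fraction of rubric items the annotator checked."""
    return {dim: sum(annotations[dim]) / len(items)
            for dim, items in RUBRIC.items()}

def cohen_kappa(a: list[bool], b: list[bool]) -> float:
    """Chance-corrected agreement between two annotators' binary judgments."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    pa, pb = sum(a) / n, sum(b) / n
    expected = pa * pb + (1 - pa) * (1 - pb)
    return (observed - expected) / (1 - expected) if expected != 1 else 1.0

# One annotator's checklist for a single note:
ann = {"completeness": [True, True, False],
       "conciseness": [True, True],
       "faithfulness": [False]}
print(score_note(ann))  # per-dimension scores in [0, 1]
```

Binary checklist items make each score directly interpretable (which criterion failed), which is the advantage the paper reports over opaque Likert ratings; the kappa function shows how agreement on such items could be quantified.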
Raj Sanjay Shah
Ph.D. student at Georgia Tech
Natural Language Processing · Computational Cognitive Science
Lei Xu
AWS AI Labs
Jon Burnsky
AWS AI Labs
Drew Bertagnolli
OneMedical
Chaitanya P. Shivade
AWS AI Labs