AI Summary
This study addresses the challenges of scaling, standardizing, and making objective the assessment of clinical communication competency for medical students, a task traditionally reliant on subjective expert judgment. Methodologically, it introduces a human-preference-aligned, interpretable automated evaluation framework that integrates fuzzy logic with large language models (LLMs). A fine-grained fuzzy annotation scheme is constructed across four dimensions: professionalism, medical relevance, ethical conduct, and contextual interference; LLM scoring capability is then optimized via prompt engineering and supervised fine-tuning (SFT). Experimental results demonstrate overall assessment accuracy exceeding 80%, with over 90% accuracy on core dimensions. The work advances quantitative modeling of clinical judgment, enhancing scalability, consistency, and interpretability in communication competency assessment within medical education.
Abstract
Clinical communication skills are critical in medical education, yet practicing and assessing them at scale is challenging. Although LLM-powered clinical scenario simulations have shown promise in enhancing medical students' clinical practice, providing automated, scalable evaluation that follows nuanced physician judgment remains difficult. This paper combines fuzzy logic with large language models (LLMs) and proposes LLM-as-a-Fuzzy-Judge to address the challenge of aligning the automated evaluation of medical students' clinical skills with physicians' subjective preferences. LLM-as-a-Fuzzy-Judge is an approach in which an LLM is fine-tuned to evaluate medical students' utterances within student-AI patient conversation scripts, based on human annotations drawn from four fuzzy sets: Professionalism, Medical Relevance, Ethical Behavior, and Contextual Distraction. The methodology begins with data collection from an LLM-powered medical education system and data annotation based on multidimensional fuzzy sets, followed by prompt engineering and supervised fine-tuning (SFT) of pre-trained LLMs using these human annotations. The results show that LLM-as-a-Fuzzy-Judge achieves over 80% overall accuracy, with over 90% on major criteria items, effectively leveraging fuzzy logic and LLMs to deliver interpretable, human-aligned assessment. This work demonstrates the viability of combining fuzzy logic and LLMs to align automated evaluation with human preferences, advances automated evaluation in medical education, and supports more robust assessment and judgment practices. The GitHub repository for this work is available at https://github.com/2sigmaEdTech/LLMAsAJudge
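To make the fuzzy-set framing concrete, the sketch below shows one way per-utterance membership degrees in the four named fuzzy sets could be aggregated into an overall judgment. The class and function names, the scoring scale, and the use of the Zadeh min t-norm are all illustrative assumptions; the abstract does not specify the paper's actual aggregation rule.

```python
from dataclasses import dataclass

@dataclass
class FuzzyScores:
    """Hypothetical membership degrees in [0, 1] that an LLM judge
    might assign to one student utterance, one per fuzzy set named
    in the paper."""
    professionalism: float
    medical_relevance: float
    ethical_behavior: float
    contextual_distraction: float  # degree of distraction (higher = worse)

def overall_judgment(s: FuzzyScores) -> float:
    """Aggregate dimension memberships via a fuzzy AND (min t-norm).

    Contextual distraction is a negative quality, so its complement
    (1 - membership) enters the conjunction. This is a classic fuzzy-logic
    construction used here only as an illustration.
    """
    return min(
        s.professionalism,
        s.medical_relevance,
        s.ethical_behavior,
        1.0 - s.contextual_distraction,
    )

scores = FuzzyScores(0.9, 0.85, 0.95, 0.2)
print(round(overall_judgment(scores), 2))  # -> 0.8
```

The min t-norm makes the overall score only as good as the weakest dimension, which mirrors clinical intuition that a single serious lapse (e.g. an ethical violation) should dominate the judgment; a weighted mean would instead let strong dimensions compensate for weak ones.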