🤖 AI Summary
Despite the widespread adoption of domain-specific fine-tuning for biomedical large language models (LLMs), systematic empirical evidence of its actual impact on clinical capabilities has been lacking. Method: We conduct a comprehensive evaluation of 25 state-of-the-art LLMs, covering both general-purpose and biomedical fine-tuned variants, across six standardized clinical tasks, using CLUE, a reproducible, open-source medical evaluation framework (all code and data publicly released). Contribution/Results: Our study is the first to empirically demonstrate that most biomedical fine-tuned models underperform general-purpose models on critical clinical competencies, including hallucination suppression, ICD-10 coding accuracy, and instruction following. Notably, Meta-Llama-3.1-70B-Instruct surpasses specialized biomedical models across multiple tasks, revealing inherent trade-offs in domain adaptation. These findings challenge the prevailing assumption that biomedical fine-tuning inherently enhances clinical performance, establishing a rigorous empirical benchmark and offering methodological guidance for LLM deployment in healthcare.
📝 Abstract
Large Language Models (LLMs) are expected to significantly contribute to patient care, diagnostics, and administrative processes. Emerging biomedical LLMs aim to address healthcare-specific challenges, including privacy demands and computational constraints. Assessing the models' suitability for this sensitive application area is of the utmost importance. However, the effect of biomedical training has not been systematically evaluated on medical tasks. This study investigates the impact of biomedical training in the context of six practical medical tasks, evaluating 25 models. In contrast to previous evaluations, our results reveal a performance decline in nine out of twelve biomedical models after fine-tuning, particularly on tasks involving hallucinations, ICD-10 coding, and instruction adherence. General-domain models like Meta-Llama-3.1-70B-Instruct outperformed their biomedical counterparts, indicating a trade-off between domain-specific fine-tuning and general medical task performance. We open-source all evaluation scripts and datasets at https://github.com/TIO-IKIM/CLUE to support further research in this critical area.