🤖 AI Summary
To address patient privacy, on-premise deployment, and computational-efficiency constraints in clinical note generation, this paper proposes LLaMA-Clinic, a specialized system built on the open-source LLaMA-2 13B model. It introduces DistillDirect, a distillation framework that performs on-policy reinforcement learning with Gemini 1.0 Pro as the teacher model while enforcing a predefined best-practice note format (i.e., format adherence is rule-governed rather than left to the model to decide). The system follows a three-stage adaptation pipeline of continued (domain-adaptive) pretraining, supervised fine-tuning, and reinforcement learning from AI and human feedback, supported by clinical-domain corpus construction and structured format constraints. In a blinded physician evaluation, 90.4% of individual ratings judged the generated notes “acceptable” or better across all criteria; notably, the “Assessment and Plan” section scored 4.2/5 on real-world readiness, exceeding physician-written notes (4.1/5), demonstrating that a lightweight, privacy-preserving, locally deployable model can achieve high-fidelity clinical note generation.
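As a concrete illustration of the on-policy distillation idea, the sketch below shows one hypothetical DistillDirect-style step: the student drafts a note from the dialogue, a teacher revision of that draft is requested, and the student is then fine-tuned on the revised target. The model names, prompt wording, note-format string, the `teacher_revise()` helper, and the loss masking are all illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of one DistillDirect-style on-policy distillation step.
# Assumptions (not from the paper): model names, the teacher_revise() helper,
# the prompt/format strings, and the loss masking are illustrative placeholders.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

STUDENT_NAME = "meta-llama/Llama-2-13b-hf"  # student backbone; a local checkpoint is assumed

tokenizer = AutoTokenizer.from_pretrained(STUDENT_NAME)
student = AutoModelForCausalLM.from_pretrained(STUDENT_NAME, torch_dtype=torch.bfloat16)
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

# Pre-defined best-practice note format (placeholder).
NOTE_FORMAT = "Subjective:\nObjective:\nAssessment and Plan:"


def teacher_revise(dialogue: str, draft_note: str) -> str:
    """Hypothetical wrapper around a Gemini 1.0 Pro call: asks the teacher to
    correct the student's draft while preserving the pre-defined note format."""
    raise NotImplementedError("call your teacher-model endpoint here")


def distill_step(dialogue: str) -> float:
    prompt = (
        f"Dialogue:\n{dialogue}\n\n"
        f"Write a clinical note using this format:\n{NOTE_FORMAT}\n\nNote:\n"
    )

    # 1) On-policy sampling: the student drafts a note from its own distribution.
    inputs = tokenizer(prompt, return_tensors="pt")
    draft_ids = student.generate(**inputs, max_new_tokens=512, do_sample=True, top_p=0.9)
    draft_note = tokenizer.decode(
        draft_ids[0, inputs.input_ids.shape[1]:], skip_special_tokens=True
    )

    # 2) Teacher feedback: the teacher revises the student's own draft.
    revised_note = teacher_revise(dialogue, draft_note)

    # 3) Distillation update: fine-tune the student on the teacher-revised target,
    #    computing the loss only over the note tokens (prompt labels are masked).
    full = tokenizer(prompt + revised_note, return_tensors="pt")
    labels = full.input_ids.clone()
    labels[:, : inputs.input_ids.shape[1]] = -100  # ignore prompt tokens in the loss
    loss = student(**full, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```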
📝 Abstract
Proprietary Large Language Models (LLMs) such as GPT-4 and Gemini have demonstrated promising capabilities in clinical text summarization tasks. However, due to patient data privacy concerns and computational costs, many healthcare providers prefer using small, locally hosted models over external generic LLMs. This study presents a comprehensive domain- and task-specific adaptation process for the open-source LLaMA-2 13 billion parameter model, enabling it to generate high-quality clinical notes from outpatient patient-doctor dialogues. Our process incorporates continued pre-training, supervised fine-tuning, and reinforcement learning from both AI and human feedback. We introduced a new approach, DistillDirect, for performing on-policy reinforcement learning with Gemini 1.0 Pro as the teacher model. Our resulting model, LLaMA-Clinic, can generate clinical notes comparable in quality to those authored by physicians. In a blinded physician reader study, the majority (90.4%) of individual evaluations rated the notes generated by LLaMA-Clinic as "acceptable" or higher across all three criteria: real-world readiness, completeness, and accuracy. In the more challenging "Assessment and Plan" section, LLaMA-Clinic scored higher (4.2/5) in real-world readiness than physician-authored notes (4.1/5). We highlight key considerations for future clinical note-generation tasks, emphasizing the importance of pre-defining a best-practice note format, rather than relying on LLMs to determine this for clinical practice.