🤖 AI Summary
Clinical deployment of LLMs faces critical challenges, including hallucination, factual inconsistency, and insufficient clinician controllability. To address these, we propose an automated clinical data-augmentation framework in which the LLM acts as a human proxy, modeling physician intent to generate high-quality, conditionally constrained training samples. Our approach enables fine-grained clinician control over the generation process without increasing model complexity or cognitive load, and it integrates conditional text generation, BioNLP-specific fine-tuning, and task-adaptive augmentation. On the ACL'24 BioNLP "Discharge Me!" shared task, our method sets a new state of the art: a 34% relative improvement over the prior baseline with augmented training, versus 9% without. Preliminary human evaluation supports gains in relevance, accuracy, and factual consistency. This work introduces the "LLM as human proxy" paradigm, establishing a scalable, trustworthy, and controllable generation framework for clinical NLP.
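The core idea of conditionally constrained augmentation can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `ClinicianIntent` fields, `build_augmentation_prompt`, and the `generate` stub are all hypothetical names standing in for whatever conditions and generator LLM the framework actually uses.

```python
# Hypothetical sketch: an LLM "proxies" a clinician by attaching explicit
# intent conditions to each source note, yielding conditioned training samples.
from dataclasses import dataclass


@dataclass
class ClinicianIntent:
    """Example conditions a clinician might impose on generated text (hypothetical)."""
    section: str        # e.g. "Brief Hospital Course"
    focus: str          # e.g. "medication changes"
    max_sentences: int  # length constraint


def build_augmentation_prompt(source_note: str, intent: ClinicianIntent) -> str:
    """Wrap a source note with explicit constraints so a generator LLM
    produces a conditionally constrained training sample."""
    return (
        f"Rewrite the following note as a '{intent.section}' section.\n"
        f"Focus on: {intent.focus}.\n"
        f"Use at most {intent.max_sentences} sentences.\n"
        f"Note:\n{source_note}"
    )


def augment(notes, intents, generate):
    """Pair every note with every intent; `generate` stands in for an LLM call."""
    return [
        {"prompt": build_augmentation_prompt(n, i), "completion": generate(n, i)}
        for n in notes
        for i in intents
    ]
```

Each resulting (prompt, completion) pair would serve as one conditioned fine-tuning example, so the downstream model learns to respect clinician-specified constraints at generation time.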
📝 Abstract
Deploying natural language generation systems in clinical settings remains challenging: despite recent advances, Large Language Models (LLMs) continue to exhibit hallucinations and factual inconsistencies, necessitating human oversight. This paper explores automated dataset augmentation that uses LLMs as human proxies to condition generation for clinician control without increasing cognitive workload. On the BioNLP ACL'24 Discharge Me! Shared Task, we achieve new state-of-the-art results with simpler methods and more efficient training than prior submissions, yielding a 9% relative improvement without augmented training and up to 34% with dataset augmentation. Preliminary human evaluation further supports the effectiveness of our approach, highlighting the potential of conditioning clinical text generation for control to enhance relevance, accuracy, and factual consistency.