🤖 AI Summary
This study addresses three key challenges in clinical long-text generation: poor factual consistency, insufficient content completeness, and misalignment with clinical documentation priorities. We propose a reinforcement learning framework that requires neither human annotations nor a separately trained reward model. Methodologically, our approach integrates DocLens, a statement-level evaluator, with Group Relative Policy Optimization (GRPO) and introduces a reward-gating mechanism to reduce training overhead. Deterministic, dialogue-aware, statement-level rewards directly optimize generation quality and allow flexible customization of clinical objectives. Experiments show substantial gains in the factual accuracy, content completeness, and conciseness of generated clinical notes, while mitigating both information omission and hallucination. A GPT-5-based qualitative evaluation shows a strong preference for our method's outputs, and training cost is reduced by 32% relative to baseline approaches.
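The core mechanics described above, group-relative advantages computed from deterministic per-completion rewards, plus a gate that skips wasted updates, can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the paper's implementation: the function names and the specific gating criterion (skipping groups whose rewards are indistinguishable) are hypothetical.

```python
import statistics


def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantage: z-score each completion's reward within
    its sampled group, so the policy is pushed toward completions that
    beat their group mean (no learned value/reward model needed)."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)  # population std over the group
    return [(r - mean) / (std + eps) for r in rewards]


def gated_update(rewards, min_spread=1e-6):
    """Reward gating (assumed criterion): if every completion in the
    group scores (nearly) the same, all advantages are ~0 and the
    update adds no learning signal, so skip it to save training cost."""
    if max(rewards) - min(rewards) < min_spread:
        return None  # gate closed: no gradient step for this group
    return group_relative_advantages(rewards)
```

For example, a group of four sampled notes with deterministic evaluator scores `[1.0, 0.0, 0.5, 0.5]` yields positive advantage for the best note and negative for the worst, while a uniform group like `[0.5, 0.5, 0.5, 0.5]` is gated out entirely.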
📝 Abstract
Automating clinical documentation with large language models requires precise alignment with priorities such as completeness and factual grounding. We present an evaluation-integrated reinforcement learning framework for long-form clinical text generation that couples Group Relative Policy Optimization (GRPO) with DocLens, a claim-level evaluator that provides deterministic, dialogue-grounded rewards. Our method directly optimizes factual grounding and completeness without training a separate reward model or relying on human-authored references. Empirically, the approach improves clinical note quality and reduces training cost via a simple reward-gating strategy. An independent GPT-5 qualitative evaluation further supports these gains, showing a higher preference for GRPO outputs in factuality, completeness, and brevity, with fewer omissions and hallucinations. Because the benchmarks are relatively clean and the base model is already well aligned, these improvements likely represent a conservative lower bound. The framework scales to real-world settings and can incorporate custom objectives such as guideline adherence or billing preferences.