Optimizing Long-Form Clinical Text Generation with Claim-Based Rewards

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses three key challenges in clinical long-text generation: poor factual consistency, insufficient content completeness, and misalignment with clinical documentation priorities. To this end, the authors propose a reinforcement learning framework that eliminates the need for human annotations or a separate reward model. Methodologically, the approach integrates DocLens, a claim-level evaluator, with Group Relative Policy Optimization (GRPO) and introduces a reward-gating mechanism to reduce training overhead. Deterministic, dialogue-grounded, claim-level rewards directly optimize generation quality and enable flexible customization of clinical objectives. Experimental results demonstrate substantial improvements in the factual accuracy, content completeness, and conciseness of clinical notes, while mitigating information omission and hallucination. A GPT-5-based qualitative evaluation shows strong preference for outputs generated by this method, and training costs are reduced by 32% compared to baseline approaches.

📝 Abstract
Automating clinical documentation with large language models requires precise alignment with priorities such as completeness and factual grounding. We present an evaluation-integrated reinforcement learning framework for long-form clinical text generation that couples Group Relative Policy Optimization (GRPO) with DocLens, a claim-level evaluator that provides deterministic, dialogue-grounded rewards. Our method directly optimizes factual grounding and completeness without training a separate reward model or relying on human-authored references. Empirically, the approach improves clinical note quality and reduces training cost via a simple reward-gating strategy. An independent GPT-5 qualitative evaluation further supports these gains, showing higher preference for GRPO outputs in factuality, completeness, and brevity, with fewer omissions and hallucinations. Because the benchmarks are relatively clean and the base model already well aligned, these improvements likely represent a conservative lower bound. The framework is scalable to real-world settings and can incorporate custom objectives such as guideline adherence or billing preferences.
Problem

Research questions and friction points this paper is trying to address.

Optimizing long-form clinical text generation using claim-based rewards
Improving factual grounding and completeness in automated clinical documentation
Reducing training costs while minimizing omissions and hallucinations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning with a deterministic claim-level evaluator (DocLens) as the reward signal
Directly optimizes factual grounding and completeness without a separate reward model or human-authored references
Reduces training cost via a simple reward-gating strategy
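The paper does not include an implementation, but the combination described above can be illustrated with a minimal sketch. The function names, the exact reward formulation (factuality × completeness over claims), and the gating criterion (skip updates when a sampled group's rewards barely vary) are all assumptions for illustration, not the authors' actual method:

```python
from statistics import mean, pstdev

def claim_reward(claims_supported, claims_total, claims_expected):
    """Hypothetical deterministic claim-level reward: factuality
    (fraction of generated claims supported by the dialogue) times
    completeness (fraction of expected claims covered)."""
    if claims_total == 0:
        return 0.0
    factuality = claims_supported / claims_total
    completeness = min(claims_total, claims_expected) / claims_expected
    return factuality * completeness

def group_relative_advantages(rewards, gate_threshold=1e-6):
    """GRPO-style advantages: standardize each sampled note's reward
    against its own group of samples. Reward gating (an assumed
    criterion here): if rewards in the group barely vary, emit zero
    advantages so the costly policy update can be skipped."""
    mu, sigma = mean(rewards), pstdev(rewards)
    if sigma <= gate_threshold:
        return [0.0] * len(rewards)  # gated: no useful learning signal
    return [(r - mu) / sigma for r in rewards]

# Four candidate notes sampled for one dialogue, each scored as
# (supported claims, total claims) against 10 expected claims.
rewards = [claim_reward(s, t, 10) for s, t in [(8, 10), (5, 8), (9, 12), (3, 6)]]
advantages = group_relative_advantages(rewards)
```

Because the evaluator is deterministic, identical notes always receive identical rewards, which is what makes a variance-based gate cheap to check before any gradient step.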
Samyak Jhaveri
Oracle Health AI
Praphul Singh
Oracle Health AI
Jangwon Kim
Oracle Health AI
Tara Taghavi
Oracle Health AI
Krishnaram Kenthapadi
Oracle Health AI
Fairness/Transparency/Explainability/Privacy in AI/ML Systems · Algorithms · Data Mining · Social