🤖 AI Summary
Large language models (LLMs) lack fine-grained, interpretable evaluation metrics for clinically grounded note generation. Method: This paper introduces a process-supervised reward model (PRM) tailored for clinical note generation, marking the first adaptation of PRMs to medical text generation. Clinical experts define the critical generation steps, and process-supervision data are constructed at scale using Gemini-Pro 1.5. The model is implemented by fine-tuning LLaMA-3.1 8B-Instruct with a multi-stage loss function and rigorous data filtering. Contribution/Results: The PRM achieves 98.8% accuracy in gold-standard identification, outperforming a conventional outcome-supervised reward model (ORM) by 28.8 percentage points, and attains 56.2% accuracy in physician preference selection, an 18.7-point improvement over the ORM. These results demonstrate substantially improved modeling of clinical guidelines and expert preferences, supporting the PRM's effectiveness and generalizability in healthcare NLP.
📝 Abstract
Process-supervised reward models (PRMs), which verify large language model (LLM) outputs step by step, have achieved significant success in mathematical and coding problems. However, their application to other domains remains largely unexplored. In this work, we train a PRM to provide step-level reward signals for clinical notes generated by LLMs from patient-doctor dialogues. Guided by real-world clinician expertise, we carefully designed step definitions for clinical notes and utilized Gemini-Pro 1.5 to automatically generate process supervision data at scale. Our proposed PRM, fine-tuned from LLaMA-3.1 8B-Instruct, outperformed both Gemini-Pro 1.5 and the vanilla outcome-supervised reward model (ORM) in two key evaluations: (1) selecting gold-reference samples from error-containing ones, achieving 98.8% accuracy (versus 70.0% for the vanilla ORM and 93.8% for Gemini-Pro 1.5), and (2) selecting physician-preferred notes, achieving 56.2% accuracy (compared to 37.5% for the vanilla ORM and 50.0% for Gemini-Pro 1.5). Additionally, we conducted ablation studies to determine optimal loss functions and data selection strategies, along with physician reader studies to explore predictors of downstream Best-of-N performance. Our promising results suggest the potential of PRMs to extend beyond the clinical domain, offering a scalable and effective solution for diverse generative tasks.
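To make the Best-of-N evaluation concrete, the sketch below shows one common way a step-level reward model can rank candidate notes: score each step of each candidate and rank candidates by their weakest step. This is a minimal illustration, not the paper's implementation; the `step_scorer` stand-in and the min-aggregation convention are assumptions for demonstration.

```python
from typing import Callable, List


def best_of_n(
    candidates: List[List[str]],          # each candidate note, split into its steps
    step_scorer: Callable[[str], float],  # hypothetical PRM: step text -> reward in [0, 1]
) -> int:
    """Return the index of the candidate whose lowest-scoring step is highest.

    Aggregating step rewards by their minimum (one common PRM convention)
    penalizes a note for its single worst step, rather than averaging
    errors away across many good steps.
    """
    def aggregate(steps: List[str]) -> float:
        return min(step_scorer(s) for s in steps)

    scores = [aggregate(c) for c in candidates]
    return max(range(len(candidates)), key=scores.__getitem__)


# Toy stand-in scorer for illustration only: longer "steps" score higher.
toy_scorer = lambda step: min(1.0, len(step) / 40)

notes = [
    ["HPI: cough x3d", "Plan: rest"],                      # terse, low-scoring steps
    ["HPI: productive cough for three days, no fever",
     "Plan: supportive care, return if symptoms worsen"],  # detailed steps
]
print(best_of_n(notes, toy_scorer))  # → 1
```

Min-aggregation is only one design choice; the product or mean of step rewards is also used in the PRM literature, and which aggregation best predicts downstream Best-of-N quality is exactly the kind of question the paper's ablations probe.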