Process-Supervised Reward Models for Verifying Clinical Note Generation: A Scalable Approach Guided by Domain Expertise

📅 2024-12-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) lack fine-grained, interpretable evaluation metrics for clinically grounded note generation. Method: This paper introduces a Process-supervised Reward Model (PRM) tailored for clinical note generation, marking the first adaptation of PRMs to medical text generation. Clinical experts define the critical generation steps, and process supervision data are constructed at scale using Gemini-Pro 1.5. The model is implemented by fine-tuning LLaMA-3.1 8B-Instruct with a multi-stage loss function and rigorous data filtering. Contribution/Results: The PRM achieves 98.8% accuracy in gold-standard identification, outperforming a conventional outcome-supervised reward model (ORM) by 28.8 percentage points, and attains 56.2% accuracy in physician preference selection, an 18.7-point improvement over the ORM. These results demonstrate substantially improved modeling of clinical guidelines and expert preferences, supporting the PRM's effectiveness and generalizability in healthcare NLP.

📝 Abstract
Process-supervised reward models (PRMs), which verify large language model (LLM) outputs step-by-step, have achieved significant success in mathematical and coding problems. However, their application to other domains remains largely unexplored. In this work, we train a PRM to provide step-level reward signals for clinical notes generated by LLMs from patient-doctor dialogues. Guided by real-world clinician expertise, we carefully designed step definitions for clinical notes and utilized Gemini-Pro 1.5 to automatically generate process supervision data at scale. Our proposed PRM, trained on the LLaMA-3.1 8B instruct model, outperformed both Gemini-Pro 1.5 and the vanilla outcome-supervised reward model (ORM) in two key evaluations: (1) selecting gold-reference samples from error-containing ones, achieving 98.8% accuracy (versus 70.0% for the vanilla ORM and 93.8% for Gemini-Pro 1.5), and (2) selecting physician-preferred notes, achieving 56.2% accuracy (compared to 37.5% for the vanilla ORM and 50.0% for Gemini-Pro 1.5). Additionally, we conducted ablation studies to determine optimal loss functions and data selection strategies, along with physician reader studies to explore predictors of downstream Best-of-N performance. Our promising results suggest the potential of PRMs to extend beyond the clinical domain, offering a scalable and effective solution for diverse generative tasks.
Problem

Research questions and friction points this paper is trying to address.

Verifying clinical note generation using PRMs
Improving accuracy in selecting physician-preferred notes
Exploring PRMs for diverse generative tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Process-supervised reward models (PRMs)
Step-level reward signals for clinical notes
Gemini-Pro 1.5 for process supervision
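The step-level verification idea above can be sketched in a few lines: a PRM scores each step of a candidate note, the step scores are aggregated into a note-level score, and the highest-scoring of N candidates is kept (Best-of-N). This is an illustrative sketch only; the `score_step` stub, the blank-line step segmentation, and the min-aggregation rule are assumptions for demonstration, not the paper's exact implementation.

```python
# Sketch of PRM-style Best-of-N selection for generated clinical notes.
# Assumptions (not from the paper): steps are blank-line-separated
# segments, and note scores aggregate step scores by taking the minimum,
# a common convention in PRM work.

def score_step(dialogue: str, steps_so_far: list[str], step: str) -> float:
    # Stand-in for the fine-tuned PRM: would return the model's
    # probability that `step` correctly continues the note given the
    # dialogue. Here we simply favor longer non-empty steps.
    return min(1.0, len(step) / 100.0)

def prm_score(dialogue: str, note: str) -> float:
    """Score a whole note as the minimum of its step-level rewards."""
    steps = [s for s in note.split("\n\n") if s.strip()]
    if not steps:
        return 0.0
    return min(
        score_step(dialogue, steps[:i], step)
        for i, step in enumerate(steps)
    )

def best_of_n(dialogue: str, candidates: list[str]) -> str:
    """Pick the candidate note the PRM scores highest."""
    return max(candidates, key=lambda note: prm_score(dialogue, note))
```

The min-aggregation reflects the intuition that a note is only as reliable as its weakest step, which is what distinguishes process supervision from an outcome-level (ORM) score of the finished note.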
Hanyin Wang
Mayo Clinic Health System, University of Illinois Urbana-Champaign
LLMs for Healthcare
Chufan Gao
University of Illinois Urbana-Champaign
Machine Learning for Healthcare · Natural Language Processing
Qiping Xu
Mayo Clinic Health System
Bolun Liu
Mayo Clinic Health System
Guleid Hussein
Mayo Clinic Health System
H. Korsapati
Mayo Clinic Health System
Mohamad El Labban
Mayo Clinic Health System
Kingsley Iheasirim
Mayo Clinic Health System
Mohamed Hassan
Mayo Clinic Health System
Gokhan Anil
Mayo Clinic Health System
Brian N. Bartlett
Mayo Clinic Health System
Jimeng Sun
Professor at University of Illinois Urbana-Champaign
AI for Healthcare · Machine Learning for Healthcare · Deep Learning for Healthcare