A recent evaluation of the performance of LLMs on radiation oncology physics using questions with randomly shuffled options

📅 2024-12-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates the domain-specific knowledge comprehension and reasoning capabilities of large language models (LLMs) in radiation oncology physics, a highly specialized, safety-critical medical discipline. Method: The authors construct a standardized, expert-validated multiple-choice test set with two methodological innovations: (1) random reordering of answer options to mitigate positional bias, and (2) "None of the above" substitution trials to probe deductive reasoning rather than option matching. Explanation-first, step-by-step structured prompting is further employed to improve reasoning transparency and consistency. Results: Experiments span five state-of-the-art models (o1-preview, GPT-4o, LLaMA 3.1 (405B), Gemini 1.5 Pro, and Claude 3.5 Sonnet) and are benchmarked against a majority vote of board-certified medical physicists. o1-preview surpasses the physicists' majority vote, and explanation-first, step-by-step prompting significantly improves reasoning stability and robustness for LLaMA 3.1, Gemini 1.5 Pro, and Claude 3.5 Sonnet. This work establishes a reproducible, rigorous evaluation paradigm for trustworthy LLM deployment in medical physics.

Technology Category

Application Category

📝 Abstract
Purpose: We present an updated study evaluating the performance of large language models (LLMs) in answering radiation oncology physics questions, focusing on the recently released models. Methods: A set of 100 multiple choice radiation oncology physics questions, previously created by a well-experienced physicist, was used for this study. The answer options of the questions were randomly shuffled to create "new" exam sets. Five LLMs (OpenAI o1-preview, GPT-4o, LLaMA 3.1 (405B), Gemini 1.5 Pro, and Claude 3.5 Sonnet) with the versions released before September 30, 2024, were queried using these new exam sets. To evaluate their deductive reasoning capabilities, the correct answers in the questions were replaced with "None of the above." Then, the explaining-first and step-by-step instruction prompts were used to test if this strategy improved their reasoning capabilities. The performance of the LLMs was compared with the answers from medical physicists. Results: All models demonstrated expert-level performance on these questions, with o1-preview even surpassing medical physicists with a majority vote. When replacing the correct answers with "None of the above," all models exhibited a considerable decline in performance, suggesting room for improvement. The explaining-first and step-by-step instruction prompts helped enhance the reasoning capabilities of the LLaMA 3.1 (405B), Gemini 1.5 Pro, and Claude 3.5 Sonnet models. Conclusion: These recently released LLMs demonstrated expert-level performance in answering radiation oncology physics questions, exhibiting great potential to assist in radiation oncology physics training and education.
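The two exam manipulations described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the question structure and the function names (`shuffle_options`, `make_nota_variant`) are assumptions, and the example question is invented for demonstration.

```python
import random

def shuffle_options(question, rng):
    """Return a copy of the question with its answer options randomly reordered,
    creating a "new" exam item that mitigates positional bias."""
    opts = question["options"][:]
    rng.shuffle(opts)
    return {**question, "options": opts}

def make_nota_variant(question):
    """Replace the correct option with "None of the above" so the model must
    deduce that no listed option is correct, rather than pattern-match."""
    opts = [o for o in question["options"] if o != question["answer"]]
    opts.append("None of the above")
    return {**question, "options": opts, "answer": "None of the above"}

# Fixed seed so the shuffled "new" exam set is reproducible.
rng = random.Random(0)

q = {
    "stem": "Which particle deposits most of its dose at the Bragg peak?",
    "options": ["Photon", "Proton", "Electron", "Neutron"],
    "answer": "Proton",
}

shuffled = shuffle_options(q, rng)   # same options, new order
nota = make_nota_variant(q)          # correct answer removed, NOTA appended
```

Applying both transforms to each of the 100 questions would yield the shuffled exam sets and the "None of the above" ablation sets the study queries each model with.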
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Radiation Therapy Physics
Knowledge Understanding and Reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Radiation Therapy Physics
Enhanced Reasoning Techniques
Peilong Wang
City of Hope
Physics · AI · Imaging
J. Holmes
Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054
Zheng Liu
School of Computing, University of Georgia, Athens, GA 30602
Dequan Chen
Department of Radiology, Mayo Clinic, Rochester, MN 55905
Tianming Liu
Distinguished Research Professor of Computer Science, University of Georgia
Brain-Inspired AI · LLM · Artificial General Intelligence · Quantum AI
Jiajian Shen
Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054
Wei Liu
Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054