When Chain-of-Thought Backfires: Evaluating Prompt Sensitivity in Medical Language Models

📅 2026-03-26
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the lack of systematic evaluation of prompt sensitivity in medical large language models, where general-purpose prompting strategies may underperform or even degrade performance in medical question answering. It presents the first comprehensive assessment of MedGemma's prompt robustness on MedMCQA and PubMedQA, examining techniques including chain-of-thought reasoning, few-shot exemplars, answer-option permutation, and context truncation. The findings show that chain-of-thought prompting reduces accuracy by 5.7% and few-shot examples incur an 11.9% drop, revealing high sensitivity to prompt perturbations. To mitigate this, two robust inference methods are proposed: cloze-style probability scoring, which consistently outperforms all evaluated prompting strategies, and permutation-based voting, which improves accuracy by 4 percentage points. Notably, truncating the context from the back costs only about 3% of accuracy, while truncating it from the front is far more damaging.
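The permutation-based voting idea in the summary can be sketched as follows. Here `answer_fn` is an assumed model-call interface (a question plus an ordered option list in, a chosen index out), not MedGemma's actual API; the toy model below is purely illustrative.

```python
from collections import Counter
from itertools import permutations

def permutation_vote(answer_fn, question, options):
    """Query the model once per option ordering and majority-vote.

    Each vote is recorded against the option *text*, not its position,
    so a model whose choice depends on where an answer appears has its
    position bias averaged out across orderings.
    """
    votes = Counter()
    for perm in permutations(range(len(options))):
        shuffled = [options[i] for i in perm]
        picked = answer_fn(question, shuffled)  # index into `shuffled`
        votes[shuffled[picked]] += 1
    return votes.most_common(1)[0][0]

# Toy position-biased model: it finds the right answer only when that
# answer appears among the first three options, otherwise it defaults
# to whatever option is shown first.
def biased_model(question, opts):
    return opts.index("penicillin") if "penicillin" in opts[:3] else 0

best = permutation_vote(
    biased_model,
    "Which drug is a beta-lactam?",
    ["penicillin", "gentamicin", "vancomycin", "ciprofloxacin"],
)
```

With 4 options there are 24 orderings; the biased model answers correctly in the 18 orderings where the right option appears early, so the vote still recovers it.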
๐Ÿ“ Abstract
Large Language Models (LLMs) are increasingly deployed in medical settings, yet their sensitivity to prompt formatting remains poorly characterized. We evaluate MedGemma (4B and 27B parameters) on MedMCQA (4,183 questions) and PubMedQA (1,000 questions) across a broad suite of robustness tests. Our experiments reveal several concerning findings. Chain-of-Thought (CoT) prompting decreases accuracy by 5.7% compared to direct answering. Few-shot examples degrade performance by 11.9% while increasing position bias from 0.14 to 0.47. Shuffling answer options causes the model to change predictions 59.1% of the time, with accuracy dropping up to 27.4 percentage points. Front-truncating context to 50% causes accuracy to plummet below the no-context baseline, yet back-truncation preserves 97% of full-context accuracy. We further show that cloze scoring (selecting the highest log-probability option token) achieves 51.8% (4B) and 64.5% (27B), surpassing all prompting strategies and revealing that models "know" more than their generated text shows. Permutation voting recovers 4 percentage points over single-ordering inference. These results demonstrate that prompt engineering techniques validated on general-purpose models do not transfer to domain-specific medical LLMs, and that reliable alternatives exist.
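The cloze-scoring decoder described in the abstract (select the option whose letter token gets the highest log-probability) can be sketched as follows. `logprob_fn` is an assumed interface standing in for a real model's token-scoring call (e.g. reading log-probabilities off the model's logits rather than its generated text), and the prompt format is illustrative.

```python
def cloze_pick(logprob_fn, question, options):
    """Pick the answer whose option letter is most probable as the next token.

    logprob_fn(prompt, continuation) -> log P(continuation | prompt);
    an assumed interface that a real implementation would back with
    the model's logits instead of sampled output.
    """
    prompt = f"{question}\nAnswer:"
    letters = [chr(ord("A") + i) for i in range(len(options))]
    scores = {letter: logprob_fn(prompt, f" {letter}") for letter in letters}
    return max(scores, key=scores.get)

# Toy stand-in scores: the model assigns option B the highest probability.
fake_logprobs = {" A": -3.2, " B": -0.7, " C": -2.9, " D": -4.1}
pick = cloze_pick(
    lambda prompt, cont: fake_logprobs[cont],
    "Which vitamin deficiency causes scurvy?",
    ["Vitamin A", "Vitamin C", "Vitamin D", "Vitamin K"],
)
```

Because the choice is read from option-token probabilities rather than free-form generation, it sidesteps formatting failures in the generated text, which is how the abstract explains models "knowing" more than they say.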
Problem

Research questions and friction points this paper is trying to address.

prompt sensitivity
medical language models
chain-of-thought
robustness
prompt engineering
Innovation

Methods, ideas, or system contributions that make the work stand out.

prompt sensitivity
medical language models
chain-of-thought backfire
cloze scoring
permutation voting
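The context-truncation test reported in the abstract (keeping the opening of the context is nearly lossless; dropping it is not) can be illustrated at the token level. Whitespace splitting stands in for a real tokenizer here, and the fraction and side names are assumptions of this sketch.

```python
def truncate_context(tokens, keep_frac=0.5, side="back"):
    """Drop (1 - keep_frac) of the tokens from one side.

    side="back" drops the tail and keeps the opening (the setting the
    abstract reports as preserving 97% of full-context accuracy);
    side="front" drops the head (the setting that hurts accuracy badly).
    """
    keep = int(len(tokens) * keep_frac)
    return tokens[:keep] if side == "back" else tokens[len(tokens) - keep:]

ctx = "BACKGROUND methods results conclusion extra tail".split()
head = truncate_context(ctx, 0.5, side="back")   # keeps the first half
tail = truncate_context(ctx, 0.5, side="front")  # keeps the last half
```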