🤖 AI Summary
This study investigates whether large language models (LLMs) exhibit cognitive flexibility in clinical reasoning, particularly their ability to avoid heuristic traps when confronted with medical questions designed to induce stereotyped thinking. Using the medicine abstraction and reasoning corpus (mARC), an adversarial medical question-answering benchmark built around the Einstellung effect, the authors systematically evaluated leading strong-reasoning LLMs from the OpenAI, Grok, Gemini, Claude, and DeepSeek families. Results show that strong-reasoning models achieve human-level cognitive flexibility on mARC, significantly outperforming weak-reasoning counterparts: on the questions most frequently missed by physicians, the top five models answered 55% to 70% correctly with high confidence. This work provides the first empirical evidence of LLMs' robust resistance to the Einstellung effect in complex clinical reasoning scenarios.
📝 Abstract
Large language models (LLMs) have achieved high accuracy on medical question-answering (QA) benchmarks, yet their capacity for flexible clinical reasoning remains debated. Here, we asked whether advances in reasoning LLMs improve their cognitive flexibility in clinical reasoning. We assessed reasoning models from the OpenAI, Grok, Gemini, Claude, and DeepSeek families on the medicine abstraction and reasoning corpus (mARC), an adversarial medical QA benchmark that uses the Einstellung effect to induce inflexible overreliance on learned heuristic patterns in contexts where those patterns become suboptimal. We found that strong reasoning models avoided Einstellung-based traps more often than weaker reasoning models, achieving human-level performance on mARC. On the questions most commonly missed by physicians, the top five performing models answered 55% to 70% correctly with high confidence, indicating that these models may be less susceptible than humans to Einstellung effects. Together, these results show that strong reasoning models exhibit improved flexibility in medical reasoning, performing on par with humans on mARC.