🤖 AI Summary
Large language models (LLMs) are increasingly applied to clinical neurosurgical assessment, yet their robustness to polysemy-induced semantic interference remains uncharacterized. Method: We systematically evaluate 28 LLMs on the CNS-SANS examination (2,904 authentic questions), introducing a novel domain-specific vulnerability assessment framework that injects semantically confounding but non-clinical distractor statements to quantify interference resilience. Contribution/Results: Six models surpass the board-certification passing threshold, with the top performer exceeding it by 15.7%. Under semantic interference, average accuracy drops by up to 20.4%, revealing for the first time a critical “pass-to-fail” degradation: one previously passing model fails outright. Open-weight models demonstrate significantly higher vulnerability than closed-weight counterparts. This work establishes the first empirical evidence of LLMs’ semantic fragility in high-stakes clinical evaluation, providing a new paradigm and benchmark for robustness assessment and safe deployment of medical foundation models.
📝 Abstract
The Congress of Neurological Surgeons Self-Assessment for Neurological Surgeons (CNS-SANS) questions are widely used by neurosurgical residents to prepare for written board examinations. Recently, these questions have also served as benchmarks for evaluating large language models' (LLMs) neurosurgical knowledge. This study assesses the performance of state-of-the-art LLMs on neurosurgery board-like questions and evaluates their robustness to the inclusion of distractor statements. A comprehensive evaluation was conducted on 28 large language models, tested on 2,904 neurosurgery board examination questions derived from the CNS-SANS. The study also introduced a distraction framework to assess the fragility of these models: simple, irrelevant distractor statements containing polysemous words, whose clinical meanings appear only in non-clinical contexts, were added to the questions to determine the extent to which such distractions degrade model performance on standard medical benchmarks. Six of the 28 tested LLMs achieved board-passing outcomes, with the top-performing model scoring more than 15.7% above the passing threshold. When exposed to distractions, accuracy across model architectures dropped significantly, by as much as 20.4%, and one model that had previously passed failed. Both general-purpose and medical open-source models experienced greater performance declines than their proprietary counterparts when subjected to the added distractors. While current LLMs demonstrate an impressive ability to answer neurosurgery board-like exam questions, their performance is markedly vulnerable to extraneous, distracting information. These findings underscore the critical need for novel mitigation strategies that bolster LLM resilience against in-text distractions, particularly for safe and effective clinical deployment.
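To make the distraction framework concrete, here is a minimal sketch of how such an evaluation could be wired up. The distractor sentences, the `model_answer` callable, and the question-item schema are illustrative assumptions, not the paper's exact CNS-SANS pipeline.

```python
"""Minimal sketch of a polysemy-based distraction evaluation (assumptions
throughout: distractors, item schema, and model interface are illustrative)."""
import random

# Hypothetical distractors: each uses a clinically polysemous word
# ("drain", "shunt", "culture", "mass", "vessel") in a non-clinical sense.
DISTRACTORS = [
    "After the storm, water began to drain slowly from the garden shed.",
    "The startup's culture encourages staff to shunt routine tasks to software.",
    "A mass of commuters pressed toward the station's model vessel exhibit.",
]

def inject_distractor(question_stem: str, rng: random.Random) -> str:
    """Append one irrelevant, polysemous sentence to the question stem."""
    return f"{question_stem} {rng.choice(DISTRACTORS)}"

def accuracy(model_answer, items, perturb=False, seed=0):
    """Score a model on multiple-choice items, optionally under distraction.

    model_answer: callable mapping a prompt string to a choice label.
    items: dicts with "stem", "choices" (labeled strings), and "answer".
    """
    rng = random.Random(seed)
    correct = 0
    for item in items:
        stem = inject_distractor(item["stem"], rng) if perturb else item["stem"]
        prompt = stem + "\n" + "\n".join(item["choices"])
        if model_answer(prompt) == item["answer"]:
            correct += 1
    return correct / len(items)

# Usage: degradation = accuracy(f, items) - accuracy(f, items, perturb=True)
```

The reported degradation then corresponds to the gap between the clean and perturbed accuracy scores computed over the same question set.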