Evaluating the performance and fragility of large language models on the self-assessment for neurological surgeons

📅 2025-05-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) exhibit semantic sensitivity to polysemy-induced interference in clinical neurosurgical assessment, yet their robustness remains uncharacterized. Method: We systematically evaluate 28 LLMs on the CNS-SANS examination (2,904 authentic questions), introducing a domain-specific vulnerability assessment framework that injects semantically confounding but non-clinical distractor statements to quantify interference resilience. Contribution/Results: Six of the 28 models surpass the board-certification passing threshold, with the top performer exceeding it by over 15.7%. Under semantic interference, accuracy drops by as much as 20.4%, revealing a critical "pass-to-fail" degradation: one previously passing model fails outright. Open-weight models, both general-purpose and medical, demonstrate significantly higher vulnerability than closed-weight counterparts. This work establishes empirical evidence of LLMs' semantic fragility in high-stakes clinical evaluation, providing a benchmark for robustness assessment and safe deployment of medical foundation models.

📝 Abstract
The Congress of Neurological Surgeons Self-Assessment for Neurological Surgeons (CNS-SANS) questions are widely used by neurosurgical residents to prepare for written board examinations. Recently, these questions have also served as benchmarks for evaluating large language models' (LLMs) neurosurgical knowledge. This study aims to assess the performance of state-of-the-art LLMs on neurosurgery board-like questions and to evaluate their robustness to the inclusion of distractor statements. A comprehensive evaluation was conducted using 28 large language models. These models were tested on 2,904 neurosurgery board examination questions derived from the CNS-SANS. Additionally, the study introduced a distraction framework to assess the fragility of these models. The framework incorporated simple, irrelevant distractor statements containing polysemous words with clinical meanings used in non-clinical contexts, to determine the extent to which such distractions degrade model performance on standard medical benchmarks. Six of the 28 tested LLMs achieved board-passing outcomes, with the top-performing models scoring over 15.7% above the passing threshold. When exposed to distractions, accuracy across various model architectures was significantly reduced (by as much as 20.4%), with one previously passing model failing. Both general-purpose and medical open-source models experienced greater performance declines than proprietary variants when subjected to the added distractors. While current LLMs demonstrate an impressive ability to answer neurosurgery board-like exam questions, their performance is markedly vulnerable to extraneous, distracting information. These findings underscore the critical need for novel mitigation strategies that bolster LLM resilience against in-text distractions, particularly for safe and effective clinical deployment.
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' performance on neurosurgery board exam questions
Evaluating LLM robustness against distracting information in medical contexts
Identifying performance gaps between general and medical-specific LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated 28 LLMs on neurosurgery exam questions
Introduced distraction framework with irrelevant statements
Found significant performance drop due to distractions
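The distraction framework described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the distractor sentences, function names, and the exact-match scoring are assumptions made for the example. The core idea is to prepend an irrelevant sentence that uses a clinical term (a polysemous word such as "appendix" or "theater") in a non-clinical sense, then compare model accuracy on the original versus the perturbed questions.

```python
# Hypothetical sketch of a polysemy-based distraction framework.
# Each distractor uses a word with a clinical meaning ("appendix",
# "theater", "column") in a clearly non-clinical context.
DISTRACTORS = [
    "The appendix of the textbook lists every abbreviation used.",
    "The theater downtown staged a sold-out play last night.",
    "The architect admired the single marble column in the lobby.",
]


def inject_distractor(question: str, distractor: str) -> str:
    """Prepend an irrelevant, semantically confounding sentence
    to a board-style question, leaving the question itself intact."""
    return f"{distractor} {question}"


def accuracy(predictions: list[str], gold: list[str]) -> float:
    """Exact-match accuracy of model answers against gold labels."""
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)
```

In an evaluation loop, each of the 2,904 questions would be scored once in its original form and once after `inject_distractor`, and the per-model accuracy gap would quantify interference resilience.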
Krithik Vishwanath
Department of Neurological Surgery, NYU Langone Medical Center, New York, New York, USA; Aerospace Engineering & Engineering Mechanics, Mathematics, The University of Texas at Austin, Austin, Texas, USA
Anton Alyakin
Medical student at Washington University
LLMs, Neurosurgery, Networks, Causality
Mrigayu Ghosh
Department of Neurological Surgery, NYU Langone Medical Center, New York, New York, USA; Biomedical Engineering, Molecular Biosciences, The University of Texas at Austin, Austin, Texas, USA
Jin Vivian Lee
Department of Neurological Surgery, NYU Langone Medical Center, New York, New York, USA; Department of Neurosurgery, Washington University School of Medicine in St. Louis, St. Louis, Missouri, USA
D. Alber
Department of Neurological Surgery, NYU Langone Medical Center, New York, New York, USA
Karl L. Sangwon
Medical Student at NYU Grossman School of Medicine
Neurosurgery, Applied Math
Douglas Kondziolka
Department of Neurological Surgery, NYU Langone Medical Center, New York, New York, USA
Eric K. Oermann
Department of Neurological Surgery, Department of Radiology, NYU Langone Medical Center, New York, New York, USA; Center for Data Science, New York University, New York, New York, USA