🤖 AI Summary
This study systematically evaluates the domain-specific reasoning capabilities of large language models (LLMs) in anesthesiology—a critical yet underexplored medical specialty.
Method: We introduce AnesBench, a cross-lingual, multi-level benchmark for anesthesiology reasoning that spans factual retrieval (System 1), hybrid reasoning (System 1.x), and complex clinical decision-making (System 2). Alongside the benchmark, we publicly release high-quality bilingual (Chinese–English) resources for continuous pre-training (CPT) and supervised fine-tuning (SFT).
Contribution/Results: Empirical analysis shows that reasoning performance varies nonlinearly with model scale and chain-of-thought length; CPT followed by SFT substantially improves domain accuracy; and test-time inference strategies, including Best-of-N sampling and beam search, enhance decision robustness. Collectively, this work establishes a methodological foundation and an empirical evidence base for evaluating and optimizing medical LLMs.
📝 Abstract
The application of large language models (LLMs) in the medical field has gained significant attention, yet their reasoning capabilities in more specialized domains like anesthesiology remain underexplored. In this paper, we systematically evaluate the reasoning capabilities of LLMs in anesthesiology and analyze the key factors influencing their performance. To this end, we introduce AnesBench, a cross-lingual benchmark designed to assess anesthesiology-related reasoning across three levels: factual retrieval (System 1), hybrid reasoning (System 1.x), and complex decision-making (System 2). Through extensive experiments, we first explore how model characteristics, including model scale, Chain-of-Thought (CoT) length, and language transferability, affect reasoning performance. We then evaluate the effectiveness of different training strategies, including continuous pre-training (CPT) and supervised fine-tuning (SFT), leveraging our curated anesthesiology-related datasets. Additionally, we investigate how test-time reasoning techniques, such as Best-of-N sampling and beam search, influence reasoning performance, and assess the impact of distillation from reasoning-enhanced models, specifically DeepSeek-R1. We will publicly release AnesBench, along with our CPT and SFT training datasets and evaluation code, at https://github.com/MiliLab/AnesBench.
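To make the test-time strategy concrete, the following is a minimal, generic sketch of Best-of-N sampling: draw N candidate answers from a stochastic generator and keep the one a scorer ranks highest. The `sample_fn` and `score_fn` stand-ins below are hypothetical toys (a real setup would sample N completions from the LLM and score them with a verifier or reward model); they are not from the paper.

```python
import random

def best_of_n(sample_fn, score_fn, n=8, seed=0):
    """Best-of-N selection: draw n candidates, return the highest-scoring one."""
    rng = random.Random(seed)
    candidates = [sample_fn(rng) for _ in range(n)]
    return max(candidates, key=score_fn)

# Hypothetical stand-ins: a sampler over four multiple-choice options
# and a scorer mimicking a reward model's confidence per option.
def sample_fn(rng):
    return rng.choice(["A", "B", "C", "D"])

def score_fn(answer):
    return {"A": 0.2, "B": 0.9, "C": 0.5, "D": 0.1}[answer]

print(best_of_n(sample_fn, score_fn, n=64))
```

With enough samples, the highest-reward option ("B" here) is almost certain to appear among the candidates and be selected; the same skeleton applies when `sample_fn` wraps a temperature-sampled LLM call.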