ControlMed: Adding Reasoning Control to Medical Language Model

📅 2025-07-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing medical large language models (LLMs) suffer from excessively long reasoning chains, resulting in high computational overhead and significant response latency—key bottlenecks for clinical deployment. To address this, we propose the first medical-domain-specific framework enabling dynamic control of reasoning length during inference. Our method introduces fine-grained, explicit control tokens that allow users to flexibly specify the desired number of reasoning steps, thereby balancing accuracy and efficiency. We adopt a three-stage training paradigm: (1) medical instruction pretraining, (2) multi-length supervised fine-tuning with human-annotated reasoning traces, and (3) model self-feedback reinforcement learning guided by factual consistency and interpretability metrics. Extensive experiments across English and Korean medical benchmarks demonstrate state-of-the-art performance, achieving an average 37% reduction in inference latency while improving factual accuracy and decision interpretability—validating the framework’s efficacy and practicality in real-world clinical settings.

Technology Category

Application Category

📝 Abstract
Reasoning Large Language Models (LLMs) with enhanced accuracy and explainability are increasingly being adopted in the medical domain, as the life-critical nature of clinical decision-making demands reliable support. Despite these advancements, existing reasoning LLMs often generate unnecessarily lengthy reasoning processes, leading to significant computational overhead and response latency. These limitations hinder their practical deployment in real-world clinical environments. To address these challenges, we introduce extbf{ControlMed}, a medical language model that enables users to actively control the length of the reasoning process at inference time through fine-grained control markers. ControlMed is trained through a three-stage pipeline: 1) pre-training on a large-scale synthetic medical instruction dataset covering both extit{direct} and extit{reasoning responses}; 2) supervised fine-tuning with multi-length reasoning data and explicit length-control markers; and 3) reinforcement learning with model-based reward signals to enhance factual accuracy and response quality. Experimental results on a variety of English and Korean medical benchmarks demonstrate that our model achieves similar or better performance compared to state-of-the-art models. Furthermore, users can flexibly balance reasoning accuracy and computational efficiency by controlling the reasoning length as needed. These findings demonstrate that ControlMed is a practical and adaptable solution for clinical question answering and medical information analysis.
Problem

Research questions and friction points this paper is trying to address.

Control reasoning length in medical LLMs to reduce overhead
Improve computational efficiency in clinical decision-making
Enhance accuracy and flexibility in medical question answering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-grained control markers for reasoning length
Three-stage training pipeline for accuracy
Balances reasoning accuracy and computational efficiency
S
Sung-Min Lee
Agentic AI Lab, KT
S
Siyoon Lee
Agentic AI Lab, KT
Juyeon Kim
Juyeon Kim
KAIST AI
Multimodal LearningInformation RetrievalLarge Language Models
K
Kyungmin Roh
Agentic AI Lab, KT