🤖 AI Summary
To address the weak instruction-following capability and inefficient human preference alignment of medical large language models (LLMs), this paper introduces CMedINS—a high-quality medical instruction dataset—and proposes a lightweight DPO-based preference alignment framework. We establish, for the first time, a fine-grained six-category instruction taxonomy grounded in authentic clinical scenarios and design a medical-context-aware DPO variant that bypasses the complexity of reinforcement learning pipelines. Leveraging instruction tuning, collaborative training on diverse medical data sources, and rigorous filtering, our model achieves significant improvements over existing baselines on medical dialogue tasks, demonstrating superior clinical instruction comprehension, safety-aware response generation, and domain-specific expertise. All models, source code, and datasets will be publicly released.
📝 Abstract
Recent research on large language models (LLMs), which are pre-trained on massive general-purpose corpora, has achieved breakthroughs in responding to human queries. However, these methods face challenges, including insufficient data to support extensive pre-training and an inability to align responses with users' instructions. To address these issues, we introduce a medical instruction dataset, CMedINS, containing six types of medical instructions derived from actual medical tasks, which effectively fine-tunes LLMs in conjunction with other data. Subsequently, we launch our medical model, IIMedGPT, employing an efficient preference alignment method, Direct Preference Optimization (DPO). The results show that our final model outperforms existing medical models in medical dialogue. Datasets, code, and model checkpoints will be released upon acceptance.
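The abstract's appeal to DPO is that it aligns the model with human preferences directly from (chosen, rejected) response pairs, with no reward model or RL loop. As context, here is a minimal sketch of the *standard* DPO objective for a single preference pair (not the paper's medical-context-aware variant, whose details are not given here); the function name and scalar log-probability inputs are illustrative assumptions:

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Standard DPO loss for one (chosen, rejected) response pair.

    Inputs are total sequence log-probabilities under the policy being
    trained and under the frozen reference (SFT) model; beta controls
    how far the policy may drift from the reference.
    """
    # Implicit rewards: log-ratio of policy to reference, scaled by beta
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log sigmoid(margin): small when the policy prefers the chosen answer
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Minimizing this loss pushes the policy to assign relatively higher likelihood to the preferred response than the reference model does; when the margin is zero the loss is log 2, and it shrinks as the preference gap widens.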