🤖 AI Summary
This study addresses the limited accuracy of large language models (LLMs) on pharmacist licensure examination–style, domain-specific question answering. We propose a lightweight, model-agnostic external retrieval-augmented generation (RAG) framework that requires no architectural modification or fine-tuning. Our method employs a three-stage semantic retrieval process over a structured pharmacological knowledge base to dynamically retrieve evidential passages, combined with evidence-driven contextual prompt engineering for plug-and-play injection of authoritative knowledge. Key contributions include: (i) the first general-purpose RAG interface specifically designed for pharmacology, enabling seamless cross-model adaptation; and (ii) fully externalized integration, substantially narrowing the performance gap between small and large models. Evaluated on a 141-question pharmacology QA benchmark, our approach improves accuracy across 11 LLMs by 7–21 percentage points (e.g., Llama 3.1 8B reaches 67%), enabling compact models to approach the performance of top-tier proprietary models.
📝 Abstract
Objectives: To evaluate large language model (LLM) performance on pharmacy licensure-style question-answering (QA) tasks and develop an external knowledge integration method to improve their accuracy.
Methods: We benchmarked eleven LLMs of varying parameter sizes (8 billion to over 70 billion) on a 141-question pharmacy dataset, measuring baseline accuracy for each model without modification. We then developed a three-stage retrieval-augmented generation (RAG) pipeline, DrugRAG, that retrieves structured drug knowledge from validated sources and augments model prompts with evidence-based context. The pipeline operates entirely outside the models, requiring no changes to model architecture or parameters.
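The three-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the lexicon-based term matching, and the overlap scoring (a stand-in for the semantic retrieval DrugRAG performs) are all assumptions.

```python
def extract_drug_terms(question: str, lexicon: list[str]) -> list[str]:
    """Stage 1 (illustrative): identify drug-related terms in the question."""
    q = question.lower()
    return [term for term in lexicon if term.lower() in q]

def retrieve_evidence(terms: list[str], knowledge_base: list[dict], top_k: int = 2) -> list[str]:
    """Stage 2 (illustrative): rank structured knowledge entries by term overlap,
    standing in for the semantic retrieval the paper describes."""
    scored = []
    for entry in knowledge_base:
        text = entry["text"]
        score = sum(term.lower() in text.lower() for term in terms)
        if score > 0:
            scored.append((score, text))
    scored.sort(key=lambda pair: -pair[0])
    return [text for _, text in scored[:top_k]]

def build_prompt(question: str, evidence: list[str]) -> str:
    """Stage 3: inject retrieved passages into the prompt; the underlying
    model itself is never modified."""
    context = "\n".join(f"- {passage}" for passage in evidence)
    return (
        "Answer using the evidence below.\n"
        f"Evidence:\n{context}\n"
        f"Question: {question}"
    )
```

Because every stage happens outside the model, the same pipeline can wrap any LLM behind a plain text-completion interface, which is what makes the approach model-agnostic.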
Results: Baseline accuracy ranged from 46% to 92%, with GPT-5 (92%) and o3 (89%) achieving the highest scores; models with 8 billion parameters or fewer scored below 50%. DrugRAG improved accuracy across all tested models, with gains of 7 to 21 percentage points (e.g., Gemma 3 27B: 61% to 71%; Llama 3.1 8B: 46% to 67%) on the 141-item benchmark.
Conclusion: Integrating external structured drug knowledge through DrugRAG measurably improves LLM accuracy on pharmacy tasks without modifying the underlying models, providing a practical pipeline for enhancing pharmacy-focused AI applications with evidence-based information.