🤖 AI Summary
Existing biomedical reasoning benchmarks suffer from limited linguistic diversity and shallow conceptual coverage, hindering progress in medical reasoning research. To address this, we introduce HEAD-QA v2, an expanded and updated version of the bilingual (Spanish/English) multiple-choice healthcare reasoning benchmark of Vilares and Gómez-Rodríguez (2019), comprising over 12,000 high-quality questions drawn from ten years of Spanish professional healthcare exams. We systematically evaluate reasoning strategies, including prompt engineering, retrieval-augmented generation (RAG), and probability-based answer selection, across multiple open-source large language models. Our empirical analysis reveals that advanced reasoning techniques yield only marginal gains; instead, model scale and intrinsic reasoning ability remain the dominant factors governing performance. This work fills a critical gap in multilingual medical reasoning evaluation and provides a reproducible, extensible benchmarking framework. By establishing rigorous, linguistically diverse evaluation standards, HEAD-QA v2 serves as a reliable foundation for future research in cross-lingual biomedical reasoning.
📝 Abstract
We introduce HEAD-QA v2, an expanded and updated version of the Spanish/English healthcare multiple-choice reasoning dataset originally released by Vilares and Gómez-Rodríguez (2019). The update responds to the growing need for high-quality datasets that capture the linguistic and conceptual complexity of healthcare reasoning. We extend the dataset to over 12,000 questions from ten years of Spanish professional exams, benchmark several open-source LLMs using prompting, RAG, and probability-based answer selection, and provide additional multilingual versions to support future work. Results indicate that performance is driven mainly by model scale and intrinsic reasoning ability, while complex inference strategies yield only limited gains. Together, these results establish HEAD-QA v2 as a reliable resource for advancing research on biomedical reasoning and model improvement.
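To make the probability-based answer selection strategy concrete, the sketch below scores each multiple-choice option by the summed log-probability the model assigns to its text given the question, and picks the argmax. This is a minimal illustration assuming a Hugging Face causal LM; the model name, prompt format, and helper functions (`option_logprob`, `select_answer`) are illustrative assumptions, not the paper's exact setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choice; any open-source causal LM works the same way.
MODEL_NAME = "meta-llama/Llama-3.1-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

def option_logprob(question: str, option: str) -> float:
    """Sum of token log-probabilities of `option` conditioned on `question`."""
    prompt_ids = tokenizer(question, return_tensors="pt").input_ids.to(model.device)
    option_ids = tokenizer(
        option, add_special_tokens=False, return_tensors="pt"
    ).input_ids.to(model.device)
    input_ids = torch.cat([prompt_ids, option_ids], dim=-1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at position i predict token i+1, so score only the option span.
    option_len = option_ids.shape[-1]
    shifted = logits[0, -option_len - 1 : -1]
    log_probs = torch.log_softmax(shifted, dim=-1)
    token_scores = log_probs.gather(1, option_ids[0].unsqueeze(-1)).squeeze(-1)
    return token_scores.sum().item()

def select_answer(question: str, options: dict[str, str]) -> str:
    """Return the label of the option with the highest log-probability."""
    scores = {label: option_logprob(question, f" {text}") for label, text in options.items()}
    return max(scores, key=scores.get)

answer = select_answer(
    "Question: Which vitamin deficiency causes scurvy?\nAnswer:",
    {"A": "Vitamin A", "B": "Vitamin B12", "C": "Vitamin C", "D": "Vitamin D"},
)
print(answer)  # expected: "C"
```

Note that summed log-probabilities favor shorter options; evaluations of this kind often normalize by option length (mean token log-probability) to correct for that bias.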