BioACE: An Automated Framework for Biomedical Answer and Citation Evaluations

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of automatically evaluating the factual consistency, terminological accuracy, and expert credibility of answers generated by large language models in biomedical question answering, along with their cited references. To this end, we propose BioACE, a novel framework that systematically integrates multidimensional answer quality metrics—completeness, correctness, precision, and recall—with an assessment of citation evidence reliability, forming an end-to-end automated evaluation pipeline. BioACE leverages natural language inference, pretrained language models, and a fact-nugget–based alignment mechanism to jointly evaluate both answers and their supporting citations. Experimental results demonstrate that BioACE’s components exhibit strong correlation with human judgments, substantially reducing reliance on expert review. The framework is released as an open-source, reusable evaluation toolkit.

📝 Abstract
With the increasing use of large language models (LLMs) for generating answers to biomedical questions, it is crucial to evaluate both the quality of the generated answers and the references provided to support the facts stated in them. Evaluating text generated by LLMs remains a challenge for question answering, retrieval-augmented generation (RAG), summarization, and many other natural language processing tasks in the biomedical domain, because expert assessment is required to verify consistency with the scientific literature and with complex medical terminology. In this work, we propose BioACE, an automated framework for evaluating biomedical answers and citations against the facts stated in the answers. The BioACE framework considers multiple aspects—completeness, correctness, precision, and recall—measured against ground-truth nuggets for answer evaluation. We developed automated approaches for each of these aspects and performed extensive experiments to assess and analyze their correlation with human evaluations. In addition, we considered multiple existing approaches, such as natural language inference (NLI), pre-trained language models, and LLMs, to evaluate the quality of the evidence provided to support the generated answers in the form of citations to the biomedical literature. Based on these detailed experiments and analyses, we identify the best approaches for biomedical answer and citation evaluation and release them as part of the BioACE (https://github.com/deepaknlp/BioACE) evaluation package.
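The nugget-based precision and recall described in the abstract can be sketched as follows. This is a hypothetical simplification: the function name and the exact-match comparison are illustrative only, since BioACE matches answer facts to ground-truth nuggets with NLI and language-model-based alignment rather than string equality.

```python
def nugget_scores(answer_nuggets, gold_nuggets):
    """Score an answer's fact nuggets against ground-truth nuggets.

    In this toy sketch, a nugget "matches" only if it is string-identical
    to a ground-truth nugget; the actual framework uses model-based
    semantic alignment instead.
    """
    matched = set(answer_nuggets) & set(gold_nuggets)
    # Precision: fraction of the answer's nuggets supported by the ground truth.
    precision = len(matched) / len(answer_nuggets) if answer_nuggets else 0.0
    # Recall: fraction of ground-truth nuggets covered by the answer.
    recall = len(matched) / len(gold_nuggets) if gold_nuggets else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


# Example: the answer states nuggets a and b correctly, plus one unsupported claim c.
p, r, f1 = nugget_scores(["a", "b", "c"], ["a", "b", "d"])
```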
Problem

Research questions and friction points this paper is trying to address.

biomedical answer evaluation
citation evaluation
large language models
automated evaluation
retrieval-augmented generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

BioACE
automated evaluation
biomedical question answering
citation assessment
retrieval-augmented generation
Deepak Gupta
National Library of Medicine (NLM) - National Institutes of Health (NIH), USA
Consumer Health Question Answering · Multi-modal Question Answering · Code-Mixing
Davis Bartels
Division of Intramural Research, National Library of Medicine, 8600 Rockville Pike, Bethesda, MD 20894, USA
Dina Demner-Fushman
Division of Intramural Research, National Library of Medicine, 8600 Rockville Pike, Bethesda, MD 20894, USA