🤖 AI Summary
This work addresses the challenge of automatically evaluating the factual consistency, terminological accuracy, and expert credibility of answers generated by large language models in biomedical question answering, along with their cited references. To this end, we propose BioACE, a novel framework that systematically integrates multidimensional answer quality metrics—completeness, correctness, precision, and recall—with an assessment of citation evidence reliability, forming an end-to-end automated evaluation pipeline. BioACE leverages natural language inference, pretrained language models, and a fact-nugget–based alignment mechanism to jointly evaluate both answers and their supporting citations. Experimental results demonstrate that BioACE’s components exhibit strong correlation with human judgments, substantially reducing reliance on expert review. The framework is released as an open-source, reusable evaluation toolkit.
📝 Abstract
With the increasing use of large language models (LLMs) to generate answers to biomedical questions, it is crucial to evaluate both the quality of the generated answers and the references provided to support the facts they state. Evaluating LLM-generated text remains a challenge for question answering, retrieval-augmented generation (RAG), summarization, and many other natural language processing tasks in the biomedical domain, because verifying consistency with the scientific literature and with complex medical terminology requires expert assessment. In this work, we propose BioACE, an automated framework for evaluating biomedical answers and citations against the facts stated in the answers. BioACE assesses answers along multiple aspects, including completeness, correctness, precision, and recall, relative to ground-truth nuggets. We developed automated approaches for each of these aspects and performed extensive experiments to assess and analyze their correlation with human evaluations. In addition, we evaluated multiple existing approaches, including natural language inference (NLI), pre-trained language models, and LLMs, for judging the quality of the evidence provided to support generated answers in the form of citations to the biomedical literature. Based on these detailed experiments and analyses, we identify the best-performing approaches for biomedical answer and citation evaluation and release them as part of the BioACE (https://github.com/deepaknlp/BioACE) evaluation package.
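To illustrate the nugget-based precision and recall described above, here is a minimal sketch in Python. It is a simplified, hypothetical illustration, not the BioACE implementation: it assumes nuggets have already been extracted and matched (BioACE uses NLI and language-model–based alignment rather than the exact set intersection shown here), and the function name `nugget_scores` is invented for this example.

```python
from typing import Set, Dict


def nugget_scores(answer_nuggets: Set[str], gold_nuggets: Set[str]) -> Dict[str, float]:
    """Score an answer's fact nuggets against ground-truth nuggets.

    Precision: fraction of the answer's nuggets that are supported by the gold set.
    Recall: fraction of the gold nuggets that the answer covers.
    """
    matched = answer_nuggets & gold_nuggets  # simplification: exact match stands in for NLI alignment
    precision = len(matched) / len(answer_nuggets) if answer_nuggets else 0.0
    recall = len(matched) / len(gold_nuggets) if gold_nuggets else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}


# Example: the answer states three nuggets, two of which appear in the gold set of four.
scores = nugget_scores(
    {"metformin lowers blood glucose", "metformin is first-line for T2D", "metformin cures T2D"},
    {"metformin lowers blood glucose", "metformin is first-line for T2D",
     "metformin reduces hepatic glucose production", "metformin is taken orally"},
)
print(scores)  # precision 2/3, recall 2/4 = 0.5
```

In a realistic pipeline, the exact-match intersection would be replaced by an entailment check between each answer nugget and each gold nugget, which is what allows paraphrased biomedical statements to count as matches.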