Benchmarking Biopharmaceuticals Retrieval-Augmented Generation Evaluation

📅 2025-04-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing QA evaluation metrics inadequately assess large language models' (LLMs') Query and Reference Understanding Capability (QRUC) in retrieval-augmented generation (RAG), particularly in the biopharmaceutical domain, where domain-specific, multilingual benchmarks are absent. Method: We introduce BRAGE—the first multilingual (English, French, German, Chinese) RAG benchmark tailored to biopharmaceutical QA—featuring a novel citation-based classification framework that overcomes the limitations of conventional metrics in open-ended, retrieval-augmented settings. Contribution/Results: Through systematic evaluation of mainstream LLMs, we identify substantial performance gaps in biopharmaceutical QRUC tasks. BRAGE provides a reproducible, quantitative, and empirically grounded benchmark to support the development, evaluation, and optimization of domain-specific RAG systems.

📝 Abstract
Recently, the application of retrieval-augmented Large Language Models (LLMs) in specific domains has gained significant attention, especially in biopharmaceuticals. However, in this context, there is no benchmark specifically designed for biopharmaceuticals to evaluate LLMs. In this paper, we introduce the Biopharmaceuticals Retrieval-Augmented Generation Evaluation (BRAGE), the first benchmark tailored for evaluating LLMs' Query and Reference Understanding Capability (QRUC) in the biopharmaceutical domain, available in English, French, German, and Chinese. In addition, traditional Question-Answering (QA) metrics like accuracy and exact match fall short in open-ended retrieval-augmented QA scenarios. To address this, we propose a citation-based classification method to evaluate LLMs' QRUC, i.e., their ability to understand the relationship between queries and references. We apply this method to evaluate mainstream LLMs on BRAGE. Experimental results show that there is a significant gap in the biopharmaceutical QRUC of mainstream LLMs, and that their QRUC needs to be improved.
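The abstract describes citation-based classification only at a high level. As an illustration of the general idea—not the paper's actual implementation—the sketch below (all function names, citation-marker formats, and class labels are hypothetical) classifies a model's answer by whether the references it cites were among those supplied in the prompt:

```python
# Hypothetical sketch of citation-based classification (not BRAGE's
# actual method): classify a generated answer by comparing the
# reference IDs it cites against the reference IDs it was given.
import re

def classify_answer(answer: str, reference_ids: set[str]) -> str:
    """Classify an answer by its citation behaviour.

    Returns one of:
      - "uncited":      no citation markers in the answer
      - "grounded":     cites only supplied references
      - "mixed":        cites both supplied and unknown references
      - "unsupported":  cites only references that were never supplied
    """
    cited = set(re.findall(r"\[(\d+)\]", answer))  # markers like [1], [2]
    if not cited:
        return "uncited"
    known = cited & reference_ids
    unknown = cited - reference_ids
    if known and not unknown:
        return "grounded"
    if known and unknown:
        return "mixed"
    return "unsupported"
```

Aggregating these labels over a QA set would give a per-model breakdown of how well queries are connected to their references, which is the kind of signal conventional accuracy or exact-match metrics cannot provide in open-ended settings.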
Problem

Research questions and friction points this paper is trying to address.

Lack of biopharmaceutical benchmark for LLM evaluation
Inadequate traditional QA metrics for open-ended scenarios
Need to improve LLMs' query and reference understanding capability
Innovation

Methods, ideas, or system contributions that make the work stand out.

First biopharmaceutical benchmark BRAGE for LLMs
Citation-based QRUC evaluation method proposed
Multilingual support in English, French, German, Chinese
Hanmeng Zhong
PatSnap Co., LTD.
Linqing Chen
PatSnap Co., LTD.
Weilei Wang
PatSnap Co., LTD.
Wentao Wu
PatSnap Co., LTD.