BPQA Dataset: Evaluating How Well Language Models Leverage Blood Pressures to Answer Biomedical Questions

📅 2025-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates large language models' (LLMs) capacity to comprehend and reason with clinical measurement data, specifically blood pressure (BP). To this end, the authors introduce BPQA, the first BP-focused medical question-answering benchmark, comprising 100 question-answer pairs verified by medical students. Systematic evaluations are conducted across BERT, BioBERT, MedAlpaca, and GPT-3.5. The results show that larger mainstream LLMs can effectively integrate BP values into clinical reasoning: GPT-3.5 and MedAlpaca benefit notably more from the inclusion of BP readings than the smaller models. Moreover, structured (i.e., normalized and annotated) BP representations improve accuracy by up to 12.3% for BioBERT and MedAlpaca. These gains suggest that retrieval-style augmentation may help domain-specific models leverage numerical clinical measurements. This work establishes both a new benchmark and a methodological starting point for measurement-driven medical QA, advancing the integration of quantitative physiological data into LLM-based clinical decision support.

📝 Abstract
Clinical measurements such as blood pressures and respiration rates are critical for diagnosing and monitoring patient outcomes. They are an important component of biomedical data and can be used to train transformer-based language models (LMs) to improve healthcare delivery. It is, however, unclear whether LMs can effectively interpret and use clinical measurements. We investigate two questions: First, can LMs effectively leverage clinical measurements to answer related medical questions? Second, how can we enhance an LM's performance on medical question-answering (QA) tasks that involve measurements? We performed a case study on blood pressure readings (BPs), a vital sign routinely monitored by medical professionals. We evaluated the performance of four LMs: BERT, BioBERT, MedAlpaca, and GPT-3.5, on our newly developed dataset, BPQA (Blood Pressure Question Answering). BPQA contains 100 medical QA pairs that were verified by medical students and designed to rely on BPs. We found that GPT-3.5 and MedAlpaca (large and medium-sized LMs) benefit more from the inclusion of BPs than BERT and BioBERT (small LMs). Further, augmenting measurements with labels improves the performance of BioBERT and MedAlpaca (domain-specific LMs), suggesting that retrieval may be useful for improving domain-specific LMs.
Problem

Research questions and friction points this paper is trying to address.

Evaluate LMs' ability to use clinical measurements for medical QA.
Enhance LM performance on medical QA tasks involving measurements.
Assess impact of blood pressure data on LM effectiveness in healthcare.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes transformer-based language models for medical QA.
Evaluates LMs' ability to interpret clinical measurements.
Augments measurements with labels to enhance LM performance.
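The label-augmentation idea above can be sketched in a few lines: rewrite each raw BP reading in a question so that it carries an explicit category label before the question reaches the model. The function names (`label_bp`, `augment_question`) and the specific thresholds are illustrative assumptions, not the paper's implementation; the cutoffs below follow the widely used ACC/AHA blood pressure categories.

```python
import re

def label_bp(reading: str) -> str:
    """Turn a reading like '142/88' into '142/88 (hypertension stage 2)'.

    Thresholds follow the ACC/AHA categories; the exact labeling scheme
    used in BPQA may differ (this is an illustrative assumption).
    """
    m = re.match(r"(\d+)\s*/\s*(\d+)", reading)
    if not m:
        return reading  # leave unparseable readings untouched
    systolic, diastolic = int(m.group(1)), int(m.group(2))
    if systolic >= 140 or diastolic >= 90:
        label = "hypertension stage 2"
    elif systolic >= 130 or diastolic >= 80:
        label = "hypertension stage 1"
    elif systolic >= 120:
        label = "elevated"
    else:
        label = "normal"
    return f"{reading} ({label})"

def augment_question(question: str) -> str:
    """Replace every systolic/diastolic pair in a question with its labeled form."""
    return re.sub(r"\d+\s*/\s*\d+", lambda m: label_bp(m.group(0)), question)

print(augment_question("The patient's blood pressure is 142/88. Is treatment indicated?"))
# → The patient's blood pressure is 142/88 (hypertension stage 2). Is treatment indicated?
```

Keeping the raw numbers alongside the label (rather than replacing them) preserves the original signal while giving smaller, domain-specific LMs an explicit categorical cue, which matches the abstract's finding that label augmentation helps BioBERT and MedAlpaca.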
Chi Hang
NYU Center for Data Science, NYU Langone Health

Ruiqi Deng
NYU Center for Data Science, NYU Langone Health

Lavender Yao Jiang
New York University
NLP · Healthcare · Language Models · Privacy

Zihao Yang
New York University
Natural Language Processing

Anton Alyakin
Medical student at Washington University
LLMs · Neurosurgery · Networks · Causality

D. Alber
NYU Grossman School of Medicine, NYU Langone Health

E. Oermann
NYU Grossman School of Medicine, NYU Langone Health