🤖 AI Summary
This study identifies a systemic fairness risk in large language models (LLMs) applied to drug-safety prediction: the models inappropriately rely on sociodemographic attributes, such as education level and housing stability, that are non-clinical and socially sensitive, producing inflated adverse event (AE) risk estimates for vulnerable populations. To address this, we propose a persona-based evaluation framework that distinguishes explicit from implicit bias patterns. Using structured FAERS data and two LLMs, ChatGPT-4o and Bio-Medical-Llama-3-8B, we conduct multi-role, multi-dimensional, persona-driven reasoning analyses. Our work provides the first empirical evidence that LLMs erroneously associate sociodemographic labels with AE probability, significantly compromising predictive fairness. Our contributions include a reproducible, persona-grounded fairness-assessment paradigm and actionable debiasing pathways for trustworthy AI deployment in pharmacoepidemiology.
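A minimal sketch of how such a persona-grounded evaluation loop might be structured is given below. Every name here (`query_model`, `PERSONAS`, `ROLES`, the prompt template, the example persona levels) is an illustrative assumption for exposition, not the paper's actual implementation:

```python
# Illustrative sketch of a persona-grounded AE-prediction probe.
# Assumptions (not from the paper): query_model is a stand-in for the
# actual LLM call (e.g., to ChatGPT-4o or Bio-Medical-Llama-3-8B) and
# returns a (probability, reasoning_text) pair; persona levels are examples.
from itertools import product

ROLES = ["general practitioner", "specialist", "patient"]

# Persona dimensions drawn from the paper; the levels shown are examples.
PERSONAS = {
    "education": ["less than high school", "postgraduate"],
    "housing": ["unstable housing", "stable housing"],
    "insurance": ["uninsured", "privately insured"],
}

PROMPT = (
    "You are advising a {role}. A patient who has {persona} is taking "
    "{drug}. Estimate the likelihood (0-1) that they will experience the "
    "adverse event '{event}', and explain your reasoning."
)

def evaluate_case(query_model, drug, event):
    """Query one FAERS-derived drug/event pair under every role x persona."""
    results = []
    for role, (dim, levels) in product(ROLES, PERSONAS.items()):
        for level in levels:
            prompt = PROMPT.format(role=role, persona=level,
                                   drug=drug, event=event)
            prob, reasoning = query_model(prompt)
            results.append({"role": role, "dimension": dim,
                            "persona": level, "p_ae": prob,
                            "reasoning": reasoning})
    return results
```

Since persona wording is the only thing varied between paired prompts, any spread in the returned probabilities is attributable to the sociodemographic label rather than to clinical content.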
📝 Abstract
Large language models (LLMs) are increasingly applied in biomedical domains, yet their reliability in drug-safety prediction remains underexplored. In this work, we investigate whether LLMs incorporate sociodemographic information into adverse event (AE) predictions, despite such attributes being clinically irrelevant. Using structured data from the United States Food and Drug Administration Adverse Event Reporting System (FAERS) and a persona-based evaluation framework, we assess two state-of-the-art models, ChatGPT-4o and Bio-Medical-Llama-3-8B, across diverse personas defined by education, marital status, employment, insurance, language, housing stability, and religion. We further evaluate performance across three user roles (general practitioner, specialist, patient) to reflect real-world deployment scenarios in which commercial systems often differentiate access by user type. Our results reveal systematic disparities in AE prediction accuracy: disadvantaged groups (e.g., low education, unstable housing) were frequently assigned higher predicted AE likelihoods than more privileged groups (e.g., postgraduate-educated, privately insured). Beyond outcome disparities, we identify two distinct modes of bias: explicit bias, where incorrect predictions directly reference persona attributes in their reasoning traces, and implicit bias, where predictions vary across personas even though the persona attributes are never mentioned in the reasoning. These findings expose critical risks in applying LLMs to pharmacovigilance and highlight the urgent need for fairness-aware evaluation protocols and mitigation strategies before clinical deployment.
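Under the same illustrative assumptions as the sketch above, the explicit/implicit distinction could be operationalized roughly as follows. The gap threshold and the naive keyword match are placeholders standing in for whatever disparity metric and trace-analysis method the paper actually uses:

```python
# Hedged sketch of the explicit/implicit bias taxonomy: explicit bias
# surfaces the persona attribute in the reasoning trace; implicit bias
# shows up only as prediction shifts across otherwise-identical prompts.
def classify_bias(records, gap_threshold=0.1):
    """Label each persona dimension for one drug/event/role.

    `records` are outputs of evaluate_case (see the sketch above).
    Returns a dict mapping dimension -> "explicit" | "implicit" | "none".
    """
    by_dim = {}
    for r in records:
        by_dim.setdefault(r["dimension"], []).append(r)

    labels = {}
    for dim, recs in by_dim.items():
        # Spread in predicted AE probability across persona levels.
        gap = max(r["p_ae"] for r in recs) - min(r["p_ae"] for r in recs)
        # Naive keyword proxy for "the rationale cites the persona";
        # a real pipeline would use a more robust trace analysis.
        mentions = any(r["persona"].split()[0] in r["reasoning"].lower()
                       for r in recs)
        if gap > gap_threshold and mentions:
            labels[dim] = "explicit"   # persona cited in the rationale
        elif gap > gap_threshold:
            labels[dim] = "implicit"   # inconsistent, persona unmentioned
        else:
            labels[dim] = "none"
    return labels
```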