🤖 AI Summary
This work identifies shortcut learning in large language models (LLMs) for social determinants of health (SDOH) extraction—particularly drug use status—from clinical text: models over-rely on superficial lexical cues (e.g., “alcohol”, “smoking”), yielding spurious predictions (e.g., falsely inferring current or past drug use from mentions of other substances) and exhibiting pronounced gender performance disparities. Using the MIMIC portion of the SHAC dataset, we conduct the first systematic diagnosis of spurious correlations and latent gender bias in SDOH extraction. We propose a medical trustworthiness–oriented mitigation framework integrating prompt optimization and chain-of-thought (CoT) reasoning. Experiments demonstrate that our approach significantly reduces false-positive rates and narrows gender-based performance gaps. The framework establishes a reproducible, generalizable paradigm for robustness evaluation and bias mitigation in clinical LLMs, advancing reliability and fairness in healthcare AI applications.
📝 Abstract
Social determinants of health (SDOH) extraction from clinical text is critical for downstream healthcare analytics. Although large language models (LLMs) have shown promise, they may rely on superficial cues, leading to spurious predictions. Using the MIMIC portion of the SHAC (Social History Annotation Corpus) dataset and focusing on drug status extraction as a case study, we demonstrate that mentions of alcohol or smoking can falsely induce models to predict current or past drug use where none is documented, while also uncovering concerning gender disparities in model performance. We further evaluate mitigation strategies, such as prompt engineering and chain-of-thought reasoning, to reduce these false positives, providing insights into enhancing LLM reliability in health domains.
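To make the mitigation idea concrete, the sketch below shows one plausible shape such a prompt could take: a chain-of-thought template that explicitly instructs the model not to treat alcohol or smoking mentions as evidence of drug use. This is a hypothetical illustration under our own assumptions, not the paper's actual prompts; the function name, label set, and wording are invented for this example.

```python
# Hypothetical sketch of a chain-of-thought (CoT) prompt for drug-status
# extraction, targeting the shortcut described in the abstract (mentions of
# alcohol/smoking falsely triggering drug-use predictions).
# All names, labels, and wording here are illustrative assumptions.

def build_cot_prompt(note_text: str) -> str:
    """Build a CoT extraction prompt for a patient's drug use status."""
    return (
        "You are extracting social determinants of health from a clinical note.\n"
        "Task: determine the patient's DRUG use status, one of "
        "{none, current, past, unknown}.\n"
        "Important: mentions of alcohol or smoking are NOT evidence of drug use. "
        "Base your answer only on explicit statements about drugs.\n"
        "Think step by step: (1) quote any sentences that mention drugs; "
        "(2) decide whether they describe current use, past use, or denial; "
        "(3) output the final label.\n\n"
        f"Clinical note:\n{note_text}\n\nAnswer:"
    )

# Example: a note mentioning alcohol but explicitly denying drug use.
prompt = build_cot_prompt("Social Hx: drinks 2 beers/week. Denies illicit drug use.")
```

The key design choice is pairing an explicit negative constraint (other substances are not evidence) with a stepwise quote-then-decide structure, so the model must ground its label in drug-specific text rather than lexical co-occurrence.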