Spurious Correlations and Beyond: Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models

📅 2025-05-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies shortcut learning in large language models (LLMs) for social determinants of health (SDOH) extraction—particularly substance use status—from clinical text: models over-rely on superficial lexical cues (e.g., “alcohol”, “smoking”), yielding spurious predictions (e.g., misclassifying current vs. past use) and exhibiting pronounced gender performance disparities. Leveraging the MIMIC-SHAC dataset, we conduct the first systematic diagnosis of spurious correlations and latent gender bias in SDOH extraction. We propose a medical trustworthiness–oriented mitigation framework integrating prompt optimization and chain-of-thought (CoT) reasoning. Experiments demonstrate that our approach significantly reduces false-positive rates and narrows gender-based performance gaps. The framework establishes a reproducible, generalizable paradigm for robustness evaluation and bias mitigation in clinical LLMs, advancing reliability and fairness in healthcare AI applications.

📝 Abstract
Social determinants of health (SDOH) extraction from clinical text is critical for downstream healthcare analytics. Although large language models (LLMs) have shown promise, they may rely on superficial cues, leading to spurious predictions. Using the MIMIC portion of the SHAC (Social History Annotation Corpus) dataset and focusing on drug status extraction as a case study, we demonstrate that mentions of alcohol or smoking can falsely induce models to predict current/past drug use where none is present, while also uncovering concerning gender disparities in model performance. We further evaluate mitigation strategies, such as prompt engineering and chain-of-thought reasoning, to reduce these false positives, providing insights into enhancing LLM reliability in health domains.
Problem

Research questions and friction points this paper is trying to address.

Identifying and reducing spurious correlations in SDOH extraction
Addressing shortcut learning in LLMs for clinical text analysis
Mitigating false positives and gender biases in drug status predictions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using the SHAC dataset for SDOH extraction
Applying prompt engineering to reduce spurious predictions
Employing chain-of-thought reasoning to improve reliability
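The prompt-engineering and chain-of-thought ideas above can be sketched as a minimal prompt template; the instructions, label set, and function name below are illustrative assumptions, not the paper's actual prompts:

```python
# Hypothetical sketch of a chain-of-thought prompt for drug-status
# extraction from a social-history note. The wording and the label set
# (none/current/past/unknown) are assumptions for illustration only.

COT_TEMPLATE = """You are extracting social determinants of health.
Task: determine the patient's DRUG use status from the social history below.
Valid labels: none, current, past, unknown.

Think step by step:
1. List every substance mentioned and whether it is a drug, alcohol, or tobacco.
2. Ignore alcohol and tobacco mentions; they belong to separate SDOH fields.
3. For drug mentions only, decide whether use is current, past, or denied.
4. If no drug mention exists, answer "none".

Social history: {note}
Reasoning:"""


def build_cot_prompt(note: str) -> str:
    """Fill the template with one de-identified social-history snippet."""
    return COT_TEMPLATE.format(note=note.strip())


if __name__ == "__main__":
    # A note like this is exactly the failure case the paper describes:
    # an alcohol mention should not trigger a current/past drug prediction.
    print(build_cot_prompt("Drinks 2 beers nightly. Denies illicit drug use."))
```

The explicit "ignore alcohol and tobacco" step targets the shortcut the paper diagnoses: lexical cues like "alcohol" spuriously driving drug-status predictions.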