Uncovering Bias Paths with LLM-guided Causal Discovery: An Active Learning and Dynamic Scoring Approach

📅 2025-06-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
In realistic noisy and confounded environments, causal discovery often introduces spurious or biased paths, undermining fairness in machine learning. Method: This paper proposes the first large language model (LLM)-guided causal discovery framework, integrating LLM-derived semantic priors with statistical causal inference. It introduces a composite scoring function based on mutual information, partial correlation, and LLM confidence, coupled with breadth-first search featuring dynamic weight updating and an LLM-driven active causal querying mechanism that prioritizes fairness-critical variable pairs. Contribution/Results: Evaluated on a semi-synthetic fairness-oriented causal benchmark (an enhanced UCI Adult dataset), the method achieves up to a 37% improvement in recall of fairness-critical causal paths over state-of-the-art causal discovery methods under noise, label corruption, and latent confounding. It consistently attains superior global causal graph F1 scores and significantly enhances the reproducibility and robustness of bias auditing.
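The composite score described above combines two statistical signals with an LLM prior. A minimal sketch of how such a score could be computed is below; the paper does not publish its implementation, so the histogram-based MI estimator, the regression-based partial correlation, and the weight values are illustrative assumptions, not the authors' code.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram-based mutual information estimate in nats (an assumed
    estimator; the paper does not specify one)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def partial_correlation(x, y, z):
    """Correlation of x and y after linearly regressing out a single
    control variable z."""
    def residual(v):
        coef = np.polyfit(z, v, 1)
        return v - np.polyval(coef, z)
    return float(np.corrcoef(residual(x), residual(y))[0, 1])

def composite_score(mi, pcorr, llm_conf, weights=(0.4, 0.3, 0.3)):
    """Weighted combination of MI, |partial correlation|, and LLM
    confidence. The weights here are placeholders; in the paper they
    are updated dynamically during search."""
    w_mi, w_pc, w_llm = weights
    return w_mi * mi + w_pc * abs(pcorr) + w_llm * llm_conf
```

A pair whose statistical signals and LLM confidence are all high would be prioritized for querying first; the dynamic reweighting step is omitted here for brevity.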

📝 Abstract
Causal discovery (CD) plays a pivotal role in understanding the mechanisms underlying complex systems. While recent algorithms can detect spurious associations and latent confounding, many struggle to recover fairness-relevant pathways in realistic, noisy settings. Large Language Models (LLMs), with their access to broad semantic knowledge, offer a promising complement to statistical CD approaches, particularly in domains where metadata provides meaningful relational cues. Ensuring fairness in machine learning requires understanding how sensitive attributes causally influence outcomes, yet CD methods often introduce spurious or biased pathways. We propose a hybrid LLM-based framework for CD that extends a breadth-first search (BFS) strategy with active learning and dynamic scoring. Variable pairs are prioritized for LLM-based querying using a composite score based on mutual information, partial correlation, and LLM confidence, improving discovery efficiency and robustness. To evaluate fairness sensitivity, we construct a semi-synthetic benchmark from the UCI Adult dataset, embedding a domain-informed causal graph with injected noise, label corruption, and latent confounding. We assess how well CD methods recover both global structure and fairness-critical paths. Our results show that LLM-guided methods, including the proposed method, demonstrate competitive or superior performance in recovering such pathways under noisy conditions. We highlight when dynamic scoring and active querying are most beneficial and discuss implications for bias auditing in real-world datasets.
Problem

Research questions and friction points this paper is trying to address.

Identify fairness-relevant causal paths in noisy data
Combine LLMs with statistical methods for robust causal discovery
Improve bias detection in machine learning via dynamic scoring
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-guided causal discovery with active learning
Dynamic scoring using mutual information and LLM confidence
Hybrid BFS strategy for fairness-critical path recovery
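The active-querying idea in the bullets above can be sketched as a priority queue over variable pairs, popped in order of composite score, with a budgeted "LLM query" deciding which edges to accept. Everything below is a simplified assumption: the `llm_confidence` stub, the 0.5/0.5 weights, the acceptance threshold, and the variable names stand in for the paper's actual prompting and dynamic reweighting machinery.

```python
import heapq
import itertools

def llm_confidence(pair):
    """Stubbed LLM prior. A real system would prompt an LLM with
    variable metadata; these hypothetical values mimic semantic
    priors for fairness-critical pairs."""
    priors = {("sex", "income"): 0.9, ("age", "income"): 0.7}
    return priors.get(pair, 0.2)

def active_causal_search(variables, stat_score, budget=3, threshold=0.5):
    """Greedy active querying: rank all pairs by a composite of a
    statistical score and the LLM prior, then spend a fixed query
    budget on the top-ranked pairs, accepting those above threshold."""
    frontier = []
    for pair in itertools.combinations(variables, 2):
        score = 0.5 * stat_score(pair) + 0.5 * llm_confidence(pair)
        heapq.heappush(frontier, (-score, pair))  # max-heap via negation
    accepted = []
    for _ in range(min(budget, len(frontier))):
        neg_score, pair = heapq.heappop(frontier)
        if -neg_score > threshold:
            accepted.append(pair)
    return accepted
```

With a uniform statistical score, only the pairs boosted by the semantic prior clear the threshold, which is the intended effect of prioritizing fairness-critical variable pairs.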
Khadija Zanna
Rice University
AI Security, HCI, Affective Computing, Digital Health
Akane Sano
Department of Electrical and Computer Engineering, Rice University