🤖 AI Summary
In realistic noisy and confounded environments, causal discovery often introduces spurious or biased paths, undermining fairness in machine learning.
Method: This paper proposes the first large language model (LLM)-guided causal discovery framework, integrating LLM-derived semantic priors with statistical causal inference. It introduces a composite scoring function based on mutual information, partial correlation, and LLM confidence, coupled with breadth-first search featuring dynamic weight updating and an LLM-driven active causal querying mechanism that prioritizes fairness-critical variable pairs.
Contribution/Results: Evaluated on a semi-synthetic fairness-oriented causal benchmark (an enhanced UCI Adult dataset), the method achieves up to a 37% improvement in recall of fairness-critical causal paths over state-of-the-art causal discovery methods under noise, label corruption, and latent confounding. It consistently attains superior global causal graph F1 scores and significantly enhances the reproducibility and robustness of bias auditing.
📝 Abstract
Causal discovery (CD) plays a pivotal role in understanding the mechanisms underlying complex systems. While recent algorithms can detect spurious associations and latent confounding, many struggle to recover fairness-relevant pathways in realistic, noisy settings. Large Language Models (LLMs), with their access to broad semantic knowledge, offer a promising complement to statistical CD approaches, particularly in domains where metadata provides meaningful relational cues. Ensuring fairness in machine learning requires understanding how sensitive attributes causally influence outcomes, yet CD methods often introduce spurious or biased pathways. We propose a hybrid LLM-based framework for CD that extends a breadth-first search (BFS) strategy with active learning and dynamic scoring. Variable pairs are prioritized for LLM-based querying using a composite score based on mutual information, partial correlation, and LLM confidence, improving discovery efficiency and robustness. To evaluate fairness sensitivity, we construct a semi-synthetic benchmark from the UCI Adult dataset, embedding a domain-informed causal graph with injected noise, label corruption, and latent confounding. We assess how well CD methods recover both global structure and fairness-critical paths. Our results show that LLM-guided methods, including the proposed method, demonstrate competitive or superior performance in recovering such pathways under noisy conditions. We highlight when dynamic scoring and active querying are most beneficial and discuss implications for bias auditing in real-world datasets.