🤖 AI Summary
Existing counterfactual explanation methods rely on handcrafted constraints or domain knowledge to model feature dependencies, which limits their generalizability, fails to capture nonlinear relationships, and ignores user preferences, often yielding causally implausible or infeasible explanations. To address these limitations, we propose RealAC, a domain-agnostic framework that automatically preserves complex feature dependencies without requiring domain expertise. RealAC aligns the joint distributions of feature pairs between factual and counterfactual instances to retain the data's intrinsic structure, and introduces a user-controllable attribute-freezing mechanism to enforce personalized feasibility constraints. Evaluated on three synthetic and two real-world datasets, RealAC outperforms state-of-the-art methods and LLM-based baselines on causal edge score, dependency preservation, and realism metrics, balancing actionability, causal plausibility, and broad applicability.
📝 Abstract
Counterfactual explanations provide human-understandable reasoning for AI-made decisions by describing minimal changes to input features that would alter a model's prediction. To be truly useful in practice, such explanations must be realistic and feasible -- they should respect both the underlying data distribution and user-defined feasibility constraints. Existing approaches often enforce inter-feature dependencies through rigid, hand-crafted constraints or domain-specific knowledge, which limits their generalizability and their ability to capture the complex, nonlinear relations inherent in data. Moreover, they rarely accommodate user-specified preferences and may suggest explanations that are causally implausible or infeasible to act upon. We introduce RealAC, a domain-agnostic framework for generating realistic and actionable counterfactuals. RealAC automatically preserves complex inter-feature dependencies without relying on explicit domain knowledge -- by aligning the joint distributions of feature pairs between factual and counterfactual instances. The framework also allows end users to "freeze" attributes they cannot or do not wish to change by suppressing change in those features during optimization. Evaluations on three synthetic and two real-world datasets demonstrate that RealAC balances realism with actionability. Our method outperforms state-of-the-art baselines and Large Language Model-based counterfactual generation techniques on the causal edge score, dependency preservation score, and IM1 realism metric, offering a solution for causality-aware and user-centric counterfactual generation.
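The attribute-freezing idea described above -- suppressing change in user-frozen features during optimization -- can be illustrated with a minimal sketch. This is not RealAC's actual implementation: the logistic classifier, the loss weights, and the feature names are illustrative assumptions, and the sketch omits RealAC's joint-distribution alignment term, showing only gradient-based counterfactual search with a freeze mask.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def counterfactual(x, w, b, target=1.0, freeze=None, lam=0.1, lr=0.5, steps=500):
    """Search for x_cf near x such that sigmoid(w @ x_cf + b) approaches target.

    freeze: boolean mask; frozen features are held at their factual values
    by zeroing their gradient (the "suppress change" mechanism)."""
    freeze = np.zeros_like(x, dtype=bool) if freeze is None else freeze
    x_cf = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_cf + b)
        # Gradient of (p - target)^2 + lam * ||x_cf - x||^2 w.r.t. x_cf
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (x_cf - x)
        grad[freeze] = 0.0  # suppress change in frozen attributes
        x_cf -= lr * grad
    return x_cf

# Toy model: decision depends on "income" (index 0) and "age" (index 1)
w, b = np.array([1.5, 0.8]), -1.0
x = np.array([0.2, 0.1])            # factual instance, predicted negative
freeze = np.array([False, True])    # user cannot change "age"
x_cf = counterfactual(x, w, b, freeze=freeze)
print(x_cf[1] == x[1])              # frozen feature is untouched
print(sigmoid(w @ x_cf + b) > 0.5)  # prediction has flipped
```

In a full RealAC-style objective, an additional penalty would align the joint distribution of each feature pair between factual and counterfactual instances, so that an increase in one feature (e.g. income) pulls dependent features along in a data-consistent way rather than changing features independently.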