Beyond Verification: Abductive Explanations for Post-AI Assessment of Privacy Leakage

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address privacy leakage risks arising from adversarial reverse inference of sensitive attributes in AI decision-making, this paper proposes the first post-hoc privacy auditing framework based on abductive reasoning. The framework uses formal logical modeling to generate minimal sufficient evidence identifying the sensitive features on which decisions critically depend, and introduces "Potentially Applicable Explanations" (PAEs), an actionable, individual-centric privacy protection mechanism. It is the first work to systematically integrate abductive explanations into privacy assessment, unifying individual-level and system-level leakage analysis while jointly ensuring interpretability and privacy guarantees. Experiments on the German Credit dataset demonstrate that the framework precisely localizes privacy leakage pathways, quantifies the influence of sensitive features, and substantially improves decision transparency and privacy controllability. This work establishes a novel paradigm for combining explainable AI with privacy-preserving design.
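
To make the core mechanism concrete: an abductive explanation is a subset-minimal set of feature literals that, once fixed, forces the model's decision no matter how the remaining features vary. The sketch below is our illustration, not the paper's code; it assumes a black-box `model` over small discrete feature domains and computes one such explanation by greedy deletion. All names (`is_sufficient`, `abductive_explanation`, `domains`) are hypothetical.

```python
from itertools import product

def is_sufficient(model, instance, fixed, domains):
    # The fixed literals are sufficient iff the prediction is invariant
    # under every completion of the remaining (free) features.
    target = model(instance)
    free = [f for f in instance if f not in fixed]
    for values in product(*(domains[f] for f in free)):
        candidate = dict(instance)
        candidate.update(zip(free, values))
        if model(candidate) != target:
            return False
    return True

def abductive_explanation(model, instance, domains):
    # Greedy deletion: drop any feature whose removal keeps the rest
    # sufficient; the survivors form one subset-minimal sufficient reason.
    fixed = set(instance)
    for f in list(instance):
        if is_sufficient(model, instance, fixed - {f}, domains):
            fixed.discard(f)
    return fixed

# Toy stand-in for a credit model: reject young foreign workers.
model = lambda x: "reject" if x["age"] == "young" and x["foreign"] == "yes" else "accept"
domains = {"age": ["young", "old"], "foreign": ["yes", "no"], "housing": ["own", "rent"]}
applicant = {"age": "young", "foreign": "yes", "housing": "rent"}
print(abductive_explanation(model, applicant, domains))  # -> {'age', 'foreign'}
```

Because sufficiency is monotone (any superset of a sufficient set stays sufficient), the greedy pass is enough to guarantee subset-minimality; the exhaustive completion check is what makes this exponential and restricts the sketch to small discrete domains.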

📝 Abstract
Privacy leakage in AI-based decision processes poses significant risks, particularly when sensitive information can be inferred. We propose a formal framework to audit privacy leakage using abductive explanations, which identifies minimal sufficient evidence justifying model decisions and determines whether sensitive information is disclosed. Our framework formalizes both individual- and system-level leakage, introducing the notion of Potentially Applicable Explanations (PAEs) to identify individuals whose outcomes can shield those with sensitive features. This approach provides rigorous privacy guarantees while producing human-understandable explanations, a key requirement for auditing tools. Experimental evaluation on the German Credit dataset illustrates how the importance of a sensitive literal in the model's decision process affects privacy leakage. Despite computational challenges and simplifying assumptions, our results demonstrate that abductive reasoning enables interpretable privacy auditing, offering a practical pathway to reconcile transparency, model interpretability, and privacy preservation in AI decision-making.
Problem

Research questions and friction points this paper is trying to address.

Auditing privacy leakage in AI decisions using abductive explanations
Identifying the minimal sufficient evidence that justifies a model decision and reveals whether sensitive information is disclosed
Formalizing individual- and system-level leakage while keeping privacy guarantees interpretable (a leakage test is sketched below)
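
A minimal sketch of the individual-level leakage test these questions point at, reusing `is_sufficient` and the toy model from the sketch above. Under one natural reading (our assumption, not the paper's exact definition), a decision discloses a sensitive attribute when every abductive explanation must mention it; the brute-force enumeration is purely illustrative.

```python
from itertools import combinations

def all_abductive_explanations(model, instance, domains):
    # Brute-force enumeration of all subset-minimal sufficient reasons.
    # Exponential in the number of features; viable only for small examples.
    feats = list(instance)
    sufficient = [frozenset(s)
                  for r in range(len(feats) + 1)
                  for s in combinations(feats, r)
                  if is_sufficient(model, instance, set(s), domains)]
    return [s for s in sufficient if not any(t < s for t in sufficient)]

def leaks_sensitive(model, instance, domains, sensitive):
    # One reading of individual-level leakage: the decision discloses the
    # sensitive feature iff no abductive explanation avoids it.
    return all(sensitive in axp
               for axp in all_abductive_explanations(model, instance, domains))

# With the toy model above, the sole explanation is {'age', 'foreign'},
# so treating "foreign" as the sensitive literal yields leakage.
print(leaks_sensitive(model, applicant, domains, "foreign"))  # -> True
```

A system-level version would aggregate this test over a population, e.g. the fraction of instances for which no sensitive-free explanation exists.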
Innovation

Methods, ideas, or system contributions that make the work stand out.

Abductive explanations as the basis for post-hoc privacy auditing
A framework that formalizes both individual- and system-level leakage
Potentially Applicable Explanations (PAEs) that let sensitive-free outcomes shield individuals with sensitive features (see the sketch below)
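
The page gives only an informal description of PAEs, so the following is a hedged guess at the mechanism rather than the paper's algorithm: collect sensitive-free explanations of *other* individuals with the same outcome that also match the protected individual's feature values, so the same decision can be justified without exposing the sensitive literal. The function name and the cohort-scan strategy are our assumptions; it reuses `all_abductive_explanations` from the previous sketch.

```python
def potentially_applicable_explanations(model, person, cohort, domains, sensitive):
    # Hedged reading of PAEs: sensitive-free explanations of other
    # individuals with the same outcome that agree with `person`'s feature
    # values. Any such explanation also justifies person's outcome, since a
    # sufficient reason depends only on the values of its own literals.
    paes = []
    for other in cohort:
        if model(other) != model(person):
            continue
        for axp in all_abductive_explanations(model, other, domains):
            if sensitive not in axp and all(person[f] == other[f] for f in axp):
                paes.append({f: other[f] for f in axp})
    return paes
```

On this reading, an individual is "shielded" whenever this list is non-empty: an auditor can publish one of these explanations instead of any explanation that mentions the sensitive feature.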