P2NIA: Privacy-Preserving Non-Iterative Auditing

📅 2025-04-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Ethical auditing of high-risk AI systems faces two critical challenges: (1) privacy leakage risks arising from auditors’ reliance on platform APIs, and (2) biased fairness assessments due to auditors’ inability to access raw training or inference data. Method: We propose a non-iterative, privacy-preserving collaborative auditing paradigm that eliminates both real-data sharing and repeated API queries. Our end-to-end protocol integrates differentially private synthetic data generation, local model training, and statistical inference. Contribution/Results: Experiments demonstrate significant reductions in platform API maintenance overhead and privacy risk; fairness metric estimation bias decreases by over 40%, while accuracy substantially improves. To our knowledge, this is the first framework enabling mutually beneficial collaboration between platforms and auditors—offering a scalable, verifiable, and privacy-safe auditing pathway compliant with emerging AI governance standards.
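The summary describes an end-to-end pipeline of differentially private synthetic data generation, local model training, and statistical inference, with no repeated API queries. Below is a minimal sketch of how such a non-iterative exchange could look; the Laplace-noised frequency generator, the function names (dp_synthesize, audit_fairness), and the demographic parity metric are illustrative assumptions, not the protocol defined in the paper.

```python
# Minimal sketch of a non-iterative, privacy-preserving audit exchange.
# The noisy-marginal generator and all names here are assumptions for
# illustration, not the actual P2NIA protocol.
import numpy as np

rng = np.random.default_rng(0)

def dp_synthesize(y, group, epsilon=1.0, n_synth=5000):
    """Platform side: release synthetic (group, decision) records sampled
    from Laplace-noised joint counts (a crude epsilon-DP generator)."""
    groups, labels = np.unique(group), np.unique(y)
    counts = np.array([[np.sum((group == g) & (y == l)) for l in labels]
                       for g in groups], dtype=float)
    noisy = np.clip(counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape), 0, None)
    probs = noisy.flatten() / noisy.sum()
    idx = rng.choice(probs.size, size=n_synth, p=probs)
    return groups[idx // len(labels)], labels[idx % len(labels)]

def audit_fairness(g_synth, y_synth):
    """Auditor side: one-shot demographic parity gap, no API queries."""
    rates = [y_synth[g_synth == g].mean() for g in np.unique(g_synth)]
    return max(rates) - min(rates)

# Toy platform data: binary sensitive group, biased binary decisions.
group = rng.integers(0, 2, size=20000)
decision = (rng.random(20000) < np.where(group == 1, 0.35, 0.55)).astype(int)

g_s, y_s = dp_synthesize(decision, group, epsilon=1.0)
print("estimated demographic parity gap:", round(audit_fairness(g_s, y_s), 3))
```

Because the platform shares a differentially private synthetic view once, the auditor never issues live API queries and the platform never exposes raw records.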

📝 Abstract
The emergence of AI legislation has increased the need to assess the ethical compliance of high-risk AI systems. Traditional auditing methods rely on platforms' application programming interfaces (APIs), where responses to queries are examined through the lens of fairness requirements. However, such approaches put a significant burden on platforms, as they are forced to maintain APIs while ensuring privacy, facing the possibility of data leaks. This lack of proper collaboration between the two parties, in turn, causes a significant challenge to the auditor, who is subject to estimation bias as they are unaware of the data distribution of the platform. To address these two issues, we present P2NIA, a novel auditing scheme that proposes a mutually beneficial collaboration for both the auditor and the platform. Extensive experiments demonstrate P2NIA's effectiveness in addressing both issues. In summary, our work introduces a privacy-preserving and non-iterative audit scheme that enhances fairness assessments using synthetic or local data, avoiding the challenges associated with traditional API-based audits.
Problem

Research questions and friction points this paper is trying to address.

Ensuring ethical compliance of high-risk AI systems under emerging AI legislation
Reducing the privacy risks of traditional API-based audits
Minimizing the auditor's estimation bias when the platform's data distribution is unknown
Innovation

Methods, ideas, or system contributions that make the work stand out.

A privacy-preserving, non-iterative auditing scheme
Fairness assessment from synthetic or local data, without sharing real data
Avoids the repeated queries and maintenance burden of traditional API-based audits (see the sketch after this list)
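As a rough illustration of the "synthetic or local data" idea, the auditor can fit a local surrogate once on shared synthetic records and audit it entirely offline. The logistic-regression surrogate and the generated toy data below are assumptions for this sketch, not the method evaluated in the paper.

```python
# Illustrative auditor-side step: fit a local surrogate on synthetic
# records shared once by the platform, then audit it offline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic records as the platform might share them: two features,
# a binary sensitive attribute, and the platform's decision.
n = 10000
sensitive = rng.integers(0, 2, size=n)
X = np.column_stack([rng.normal(size=n),
                     sensitive + rng.normal(scale=0.5, size=n)])
decision = (X[:, 1] + 0.3 * X[:, 0] + rng.normal(scale=0.3, size=n) > 0.8).astype(int)

# Local surrogate of the platform's decision function; no further API calls.
surrogate = LogisticRegression().fit(X, decision)
pred = surrogate.predict(X)

# Offline fairness check: demographic parity gap of the surrogate.
gap = abs(pred[sensitive == 1].mean() - pred[sensitive == 0].mean())
print(f"surrogate demographic parity gap: {gap:.3f}")
```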
Jade Garcia Bourrée
Univ Rennes, Inria, CNRS, IRISA
Hadrien Lautraite
Université du Québec à Montréal
Sébastien Gambs
Université du Québec à Montréal
Gilles Trédan
LAAS-CNRS
Erwan Le Merrer
Inria researcher, Rennes (Univ Rennes, Inria, CNRS, IRISA)
network science, audit algorithms
Benoît Rottembourg
Inria