OATH: Efficient and Flexible Zero-Knowledge Proofs of End-to-End ML Fairness

📅 2024-09-17
🏛️ arXiv.org
📈 Citations: 5
Influential: 1
🤖 AI Summary
To address the challenge of verifying model fairness online in AI services, this paper proposes OATH, the first deployment-oriented framework for Zero-Knowledge Proofs of Fairness (ZKPoF). OATH supports any score-based classifier while preserving the confidentiality of both model parameters and client data, enabling end-to-end secure verification across training, inference, and auditing. Leveraging composable zero-knowledge protocols and modular proof construction, OATH reduces the offline audit phase to verifying an asymptotically constant number of answered queries and, for the first time, achieves efficient fairness proofs for DNNs with tens of millions of parameters. Compared to prior work on neural-network ZKPoF, it accelerates proof generation by 1343x, incurs client-facing communication comparable to plaintext MLaaS query answering, and remains robust against malicious adversaries at concretely efficient parameter settings.

📝 Abstract
Though there is much interest in fair AI systems, the problem of fairness noncompliance -- which concerns whether fair models are used in practice -- has received less attention. Zero-Knowledge Proofs of Fairness (ZKPoF) address fairness noncompliance by allowing a service provider to verify to external parties that their model serves diverse demographics equitably, with guaranteed confidentiality over proprietary model parameters and data. They have great potential for building public trust and effective AI regulation, but no previous techniques for ZKPoF are fit for real-world deployment. We present OATH, the first ZKPoF framework that is (i) deployably efficient with client-facing communication comparable to in-the-clear ML as a Service query answering, and an offline audit phase that verifies an asymptotically constant quantity of answered queries, (ii) deployably flexible with modularity for any score-based classifier given a zero-knowledge proof of correct inference, (iii) deployably secure with an end-to-end security model that guarantees confidentiality and fairness across training, inference, and audits. We show that OATH obtains strong robustness against malicious adversaries at concretely efficient parameter settings. Notably, OATH provides a 1343x improvement to runtime over previous work for neural network ZKPoF, and scales up to much larger models -- even DNNs with tens of millions of parameters.
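To make the kind of group-fairness statement OATH proves concrete, here is a minimal plaintext sketch of one standard metric for a score-based classifier: the demographic parity gap between two groups. This is illustrative only -- the function name, threshold, and binary-group encoding are assumptions, and OATH's contribution is proving such statements in zero knowledge over confidential model parameters and data, not computing them in the clear.

```python
import numpy as np

def demographic_parity_gap(scores, groups, threshold=0.5):
    """Absolute gap in positive-prediction rates between two demographic
    groups, for a score-based classifier thresholded at `threshold`.

    scores: model scores in [0, 1]; groups: 0/1 group membership labels.
    Illustrative plaintext computation -- a ZKPoF framework would prove a
    bound on this gap without revealing scores or parameters.
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    preds = scores >= threshold  # binary decisions from scores
    rate0 = preds[groups == 0].mean()  # positive rate, group 0
    rate1 = preds[groups == 1].mean()  # positive rate, group 1
    return abs(rate0 - rate1)
```

A gap near 0 indicates the two groups receive positive predictions at similar rates; a fairness claim would assert the gap is below some agreed tolerance.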
Problem

Research questions and friction points this paper is trying to address.

Verifying model fairness confidentially in black-box ML services
Addressing unreliability of static fairness certificates under distribution shifts
Overcoming scalability limitations of cryptographic fairness verification methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online fairness certificates verify model fairness during deployment
OATH uses zero-knowledge proofs for confidential fairness certification
Cut-and-choose protocol enables scalable group fairness verification
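The cut-and-choose idea in the last bullet can be sketched with a standard sampling argument: the auditor opens a random, constant-size subset of answered queries, and a provider who answered some queries unfairly is caught unless every opened query happens to be honest. This is a minimal sketch under assumed names, not OATH's actual protocol; the soundness bound is the usual hypergeometric one.

```python
import math
import random

def catch_probability(n, bad, k):
    """Probability that opening k of n answered queries uniformly at
    random reveals at least one of the `bad` misanswered queries
    (hypergeometric tail: 1 - C(n-bad, k) / C(n, k))."""
    if k > n - bad:
        return 1.0  # cannot avoid opening a bad query
    return 1.0 - math.comb(n - bad, k) / math.comb(n, k)

def audit_sample(n, k, seed=0):
    """Auditor's random choice of k distinct query indices to open.
    A real protocol would derive this randomness verifiably, e.g. from
    a joint coin toss, rather than from a local seed."""
    rng = random.Random(seed)
    return rng.sample(range(n), k)
```

The key property is that k can stay constant as n grows: cheating on any fixed fraction of queries is detected with probability that depends on the fraction and k, not on n, which matches the abstract's claim of an audit phase verifying an asymptotically constant quantity of answered queries.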
Olive Franzese
Northwestern University
A. Shamsabadi
Brave Software
Hamed Haddadi
Brave Software, Imperial College London