ZKP-FedEval: Verifiable and Privacy-Preserving Federated Evaluation using Zero-Knowledge Proofs

📅 2025-07-15
🤖 AI Summary
In federated learning (FL), sharing performance metrics during evaluation can leak sensitive information. To address this, the work introduces zero-knowledge proofs (ZKPs) into the FL evaluation phase for the first time, proposing a verifiable, privacy-preserving evaluation protocol that requires no trusted third party and relies neither on external APIs nor on raw-data disclosure. The protocol couples a threshold-based verification circuit with an FL simulation module, enabling secure validation of loss values and classification accuracy for CNN and MLP models on the MNIST and HAR datasets, without revealing the original data or intermediate loss values. Experiments show that the scheme achieves strict privacy guarantees with low communication overhead and controllable computational cost, and empirically validate its feasibility, practicality, and scalability.

📝 Abstract
Federated Learning (FL) enables collaborative model training on decentralized data without exposing raw data. However, the evaluation phase in FL may leak sensitive information through shared performance metrics. In this paper, we propose a novel protocol that incorporates Zero-Knowledge Proofs (ZKPs) to enable privacy-preserving and verifiable evaluation for FL. Instead of revealing raw loss values, clients generate a succinct proof asserting that their local loss is below a predefined threshold. Our approach is implemented without reliance on external APIs, using self-contained modules for federated learning simulation, ZKP circuit design, and experimental evaluation on both the MNIST and Human Activity Recognition (HAR) datasets. We focus on a threshold-based proof for a simple Convolutional Neural Network (CNN) model (for MNIST) and a multi-layer perceptron (MLP) model (for HAR), and evaluate the approach in terms of computational overhead, communication cost, and verifiability.
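The core statement each client proves, that its local loss falls below a predefined public threshold, can be sketched in plain Python. The scale factor and function names below are illustrative assumptions, not the paper's implementation; ZKP circuits operate over integers, so a floating-point loss must first be encoded in fixed point.

```python
# Illustrative sketch (not the paper's circuit): the relation a client
# proves in zero knowledge is simply "quantized_loss < quantized_threshold".

SCALE = 10_000  # assumed fixed-point scale factor (4 decimal digits)

def quantize(loss: float) -> int:
    """Encode a float loss as a circuit-friendly integer."""
    return int(round(loss * SCALE))

def threshold_relation(private_loss: float, public_threshold: float) -> bool:
    """The statement the proof attests, without revealing private_loss."""
    return quantize(private_loss) < quantize(public_threshold)

print(threshold_relation(0.0312, 0.05))  # True: a valid proof exists
print(threshold_relation(0.0731, 0.05))  # False: proof generation fails
```

In a real deployment the client would run this relation inside a proving system and send only the succinct proof; the server learns the one-bit outcome, never the loss itself.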
Problem

Research questions and friction points this paper is trying to address.

Prevent sensitive data leakage in federated learning evaluations
Enable verifiable performance assessment without raw data sharing
Reduce computational and communication costs in privacy-preserving FL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Zero-Knowledge Proofs for privacy
Self-contained FL simulation modules
Threshold-based proof for CNN/MLP
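Inside an arithmetic circuit, a threshold check is commonly realized as a range-check comparison gadget: to show x < T, the prover supplies the bit decomposition of T - 1 - x as witness, and the circuit verifies the bits are boolean and recompose correctly. The sketch below illustrates that constraint logic under assumed parameters; it is not taken from the paper.

```python
NUM_BITS = 32  # assumed bit-width for quantized loss values

def bit_decompose(value: int, n: int = NUM_BITS) -> list[int]:
    """Little-endian bit decomposition, the witness a prover would supply."""
    return [(value >> i) & 1 for i in range(n)]

def circuit_less_than(x: int, threshold: int, n: int = NUM_BITS) -> bool:
    """Constraint-level sketch of proving x < threshold.

    x < T holds iff d = T - 1 - x is a non-negative n-bit value; the
    circuit checks that the supplied bits are boolean and recompose to d.
    """
    d = threshold - 1 - x
    if not (0 <= d < 2 ** n):
        return False  # no valid witness exists: proof generation fails
    bits = bit_decompose(d, n)
    assert all(b in (0, 1) for b in bits)                # booleanity constraints
    return sum(b << i for i, b in enumerate(bits)) == d  # recomposition check
```

With losses quantized to integers (e.g. loss × 10⁴), `circuit_less_than(312, 500)` succeeds while `circuit_less_than(731, 500)` does not; the verifier learns only whether the threshold relation holds.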
Daniel Commey
Texas A&M University
Cybersecurity · Blockchain & IoT · Machine Learning · FinTech & AIS Security · Cloud Security
Benjamin Appiah
Department of Computer Science, Ho Technical University, Ho, Volta Region, Ghana
Griffith S. Klogo
Department of Computer Engineering, Kwame Nkrumah University of Science and Technology (KNUST), Kumasi, Ghana
Garth V. Crosby
ETID, Texas A&M University, College Station
Network Security · Wireless Sensor Networks · Internet of Things · Cyber-physical Systems Security