Kick Bad Guys Out! Conditionally Activated Anomaly Detection in Federated Learning with Zero-Knowledge Proof Verification

📅 2023-10-06
📈 Citations: 1
Influential: 0
🤖 AI Summary
Federated learning (FL) is vulnerable to model poisoning and backdoor attacks; existing defenses often rely on strong assumptions, degrade aggregation accuracy, or impair benign client performance. This paper proposes a practical, two-stage conditionally triggered anomaly detection mechanism: Stage I performs cross-round coarse-grained screening of anomalous behavior, while Stage II enables cross-client fine-grained identification of malicious participants. We innovatively integrate conditionally activated detection with zero-knowledge proofs (ZKPs) to achieve verifiable, server-trust-free defense. Additionally, we design a lightweight edge-adaptive scheme compatible with mainstream FL frameworks. Extensive experiments demonstrate that our method significantly outperforms state-of-the-art approaches across diverse tasks and real-world edge devices. It achieves high detection accuracy without compromising benign model performance and incurs minimal verification overhead.
📝 Abstract
Federated Learning (FL) systems are vulnerable to adversarial attacks, such as model poisoning and backdoor attacks. However, existing defense mechanisms often fall short in real-world settings due to key limitations: they may rely on impractical assumptions, introduce distortions by modifying aggregation functions, or degrade model performance even in benign scenarios. To address these issues, we propose a novel anomaly detection method designed specifically for practical FL scenarios. Our approach employs a two-stage, conditionally activated detection mechanism: a cross-round check first detects whether suspicious activity has occurred, and, if warranted, a cross-client check filters out malicious participants. This mechanism preserves utility while avoiding unrealistic assumptions. Moreover, to ensure the transparency and integrity of the defense mechanism, we incorporate zero-knowledge proofs, enabling clients to verify the detection without relying solely on the server's goodwill. To the best of our knowledge, this is the first method to bridge the gap between theoretical advances in FL security and the demands of real-world deployment. Extensive experiments across diverse tasks and real-world edge devices demonstrate the effectiveness of our method over state-of-the-art defenses.
Problem

Research questions and friction points this paper is trying to address.

Detects adversarial attacks in Federated Learning systems
Avoids impractical assumptions and model performance degradation
Ensures transparency with zero-knowledge proof verification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage conditionally activated anomaly detection
Zero-knowledge proof for verification integrity
Cross-round and cross-client checks for security
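The conditional two-stage flow described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function names, the z-score test on update norms for the cross-round check, and the distance-from-median test for the cross-client check are all assumptions standing in for the paper's unspecified statistics; the ZKP verification layer is omitted entirely.

```python
import numpy as np

def cross_round_check(round_updates, z_thresh=2.5):
    """Stage I (coarse, cheap): flag the round if the latest aggregated
    update's norm deviates from the history of previous rounds.
    Hypothetical statistic; the paper's concrete test may differ."""
    norms = [np.linalg.norm(u) for u in round_updates]
    if len(norms) < 3:  # not enough history to judge
        return False
    hist, latest = norms[:-1], norms[-1]
    mu, sigma = np.mean(hist), np.std(hist) + 1e-12
    return bool(abs(latest - mu) / sigma > z_thresh)

def cross_client_check(client_updates, z_thresh=2.5):
    """Stage II (fine-grained, costlier): flag clients whose update lies
    unusually far from the coordinate-wise median update."""
    stacked = np.stack(client_updates)
    median = np.median(stacked, axis=0)
    dists = np.linalg.norm(stacked - median, axis=1)
    mu, sigma = np.mean(dists), np.std(dists) + 1e-12
    return [i for i, d in enumerate(dists) if (d - mu) / sigma > z_thresh]

def conditional_detect(round_history, client_updates):
    """Run the cheap cross-round check every round; only activate the
    expensive cross-client check when suspicion is raised."""
    if not cross_round_check(round_history):
        return []  # no anomaly across rounds: skip Stage II entirely
    return cross_client_check(client_updates)
```

The point of the conditional activation is in `conditional_detect`: in benign rounds only the cheap Stage I statistic is computed, so the per-client screening cost is paid only when cross-round behavior already looks suspicious.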