RobQFL: Robust Quantum Federated Learning in Adversarial Environment

📅 2025-09-05
🤖 AI Summary
The noise resilience of Quantum Federated Learning (QFL) under adversarial settings remains poorly understood. Method: This paper presents the first systematic analysis of QFL's adversarial vulnerability and proposes RobQFL, a robust QFL framework that integrates adversarial training directly into the federated learning loop. RobQFL exposes three tunable axes: the fraction of clients that train adversarially (client coverage), an ε-mixed perturbation scheduling scheme, and a choice between fine-tuning and from-scratch optimization. Contribution/Results: Two evaluation metrics, Accuracy-Robustness Area and Robustness Volume, are proposed to quantify the trade-off between clean accuracy and adversarial robustness. Experiments on MNIST and Fashion-MNIST demonstrate that involving only 20%–50% of clients in adversarial training improves adversarial accuracy by approximately 15 percentage points while degrading clean accuracy by less than 2 percentage points, achieving a favorable balance between robustness and generalization.

📝 Abstract
Quantum Federated Learning (QFL) merges privacy-preserving federation with quantum computing gains, yet its resilience to adversarial noise is unknown. We first show that QFL is as fragile as centralized quantum learning. We propose Robust Quantum Federated Learning (RobQFL), embedding adversarial training directly into the federated loop. RobQFL exposes tunable axes: client coverage $\gamma$ (0–100%), perturbation scheduling (fixed-$\varepsilon$ vs $\varepsilon$-mixes), and optimization (fine-tune vs scratch), and distils the resulting $\gamma \times \varepsilon$ surface into two metrics: Accuracy-Robustness Area and Robustness Volume. On 15-client simulations with MNIST and Fashion-MNIST, under IID and Non-IID conditions, training only 20–50% of clients adversarially boosts $\varepsilon \leq 0.1$ accuracy by $\sim$15 pp at $< 2$ pp clean-accuracy cost; fine-tuning adds 3–5 pp. With $\geq$75% coverage, a moderate $\varepsilon$-mix is optimal, while high-$\varepsilon$ schedules help only at 100% coverage. Label-sorted non-IID splits halve robustness, underscoring data heterogeneity as a dominant risk.
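The abstract does not give a closed form for Accuracy-Robustness Area, but the name and the $\gamma \times \varepsilon$ framing suggest an area under the accuracy-vs-$\varepsilon$ curve. A minimal sketch, assuming trapezoidal integration normalized by the $\varepsilon$ span (so a model that stays at accuracy 1.0 across all tested $\varepsilon$ scores 1.0):

```python
def accuracy_robustness_area(eps_grid, accuracies):
    """Hypothetical Accuracy-Robustness Area: normalized trapezoidal
    area under the adversarial-accuracy-vs-epsilon curve.

    eps_grid   -- increasing perturbation strengths, e.g. [0.0, 0.05, 0.1]
    accuracies -- model accuracy measured at each epsilon
    """
    area = 0.0
    # Trapezoidal rule over consecutive (epsilon, accuracy) pairs.
    for (e0, a0), (e1, a1) in zip(
        zip(eps_grid, accuracies), zip(eps_grid[1:], accuracies[1:])
    ):
        area += 0.5 * (a0 + a1) * (e1 - e0)
    # Normalize by the epsilon span so constant accuracy 1.0 yields 1.0.
    return area / (eps_grid[-1] - eps_grid[0])
```

Robustness Volume would then be the analogous integral over the full $\gamma \times \varepsilon$ surface; the exact definitions are in the paper, so treat this as an illustration of the shape of the metric rather than the authors' formula.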
Problem

Research questions and friction points this paper is trying to address.

Robustness of Quantum Federated Learning to adversarial noise
Improving accuracy under adversarial attacks with tunable parameters
Addressing data heterogeneity and client coverage in QFL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Embedding adversarial training into federated loop
Tunable client coverage and perturbation scheduling
Optimization via fine-tuning or scratch training
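The two tunable axes above can be sketched in a few lines. This is a hypothetical illustration of the control knobs only (client selection at coverage $\gamma$ and fixed-$\varepsilon$ vs $\varepsilon$-mix scheduling), not the authors' implementation; the function names and schedule format are assumptions:

```python
import random

def select_adversarial_clients(client_ids, gamma, seed=0):
    """Pick a gamma fraction of clients to run adversarial training
    this round; the rest train on clean data only."""
    rng = random.Random(seed)
    k = round(gamma * len(client_ids))
    return set(rng.sample(client_ids, k))

def sample_epsilon(schedule, rng):
    """Fixed-epsilon vs epsilon-mix scheduling: a fixed schedule always
    returns the same strength, an epsilon-mix draws one per batch."""
    if schedule["kind"] == "fixed":
        return schedule["eps"]
    return rng.choice(schedule["eps_values"])
```

For example, with the paper's 15-client setup and $\gamma = 0.2$, three clients would be selected for adversarial training each run, and an $\varepsilon$-mix schedule such as `{"kind": "mix", "eps_values": [0.05, 0.1]}` would alternate perturbation strengths across batches.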