🤖 AI Summary
Quantum federated learning (QFL) faces dual privacy challenges—protecting both raw training data and model parameters across distributed quantum devices. Method: This paper proposes the first multi-protocol privacy framework integrating singular value decomposition (SVD), quantum key distribution (QKD), and analytical quantum gradient descent (AQGD). It enables secure local data preprocessing via SVD-based dimensionality reduction and sanitization, end-to-end encrypted model transmission leveraging QKD-generated keys, and efficient quantum training optimization using AQGD. Contribution/Results: Theoretical analysis and experiments on real quantum hardware (e.g., IBM Q) demonstrate that the framework achieves ≥96.2% test accuracy on benchmark datasets (e.g., MNIST) under strict privacy guarantees, while attaining convergence rates comparable to classical federated learning. It thus uniquely balances security, practicality, and scalability in QFL.
📝 Abstract
Quantum Federated Learning (QFL) promises to revolutionize distributed machine learning by combining the computational power of quantum devices with collaborative model training. Yet, privacy of both data and models remains a critical challenge. In this work, we propose a privacy-preserving QFL framework where a network of $n$ quantum devices trains local models and transmits them to a central server under a multi-layered privacy protocol. Our design leverages Singular Value Decomposition (SVD), Quantum Key Distribution (QKD), and Analytic Quantum Gradient Descent (AQGD) to secure data preparation, model sharing, and training stages. Through theoretical analysis and experiments on contemporary quantum platforms and datasets, we demonstrate that the framework robustly safeguards data and model confidentiality while maintaining training efficiency.