🤖 AI Summary
This work addresses privacy leakage and quantum resource constraints in multi-client collaborative training for the quantum computing era. Method: We propose a practical quantum federated learning (QFL) framework that integrates distributed quantum key distribution and quantum secret sharing to achieve information-theoretic security for model updates, adopts a scalable quantum-classical hybrid architecture supporting both parameterized quantum circuit training and classical federated optimization, and incorporates model compression to reduce communication overhead. Contributions/Results: We experimentally validate the framework on a real 4-client quantum network; large-scale simulations with 200 clients demonstrate a 75% reduction in communication cost and rapid convergence on MNIST; and adding a quantum client significantly improves the global model's accuracy in classifying multipartite entangled and non-stabilizer quantum datasets. This work establishes both theoretical foundations and a system-level implementation paradigm for privacy-enhanced distributed machine learning in quantum network environments.
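To make the secure-aggregation idea concrete, below is a minimal classical sketch of additive masking, assuming the pairwise symmetric keys have already been established over QKD links and are used to seed a shared pseudorandom generator. The `shared_keys` dictionary and function names are illustrative assumptions, not the paper's interface; the actual protocol additionally uses quantum secret sharing and quantum one-time pads.

```python
import numpy as np

def pairwise_masks(client_id, peer_ids, shared_keys, shape):
    """Derive cancelling additive masks from keys shared pairwise between clients.

    shared_keys[(i, j)] is a symmetric secret assumed to have been established
    between clients i and j via quantum key distribution (hypothetical interface).
    """
    mask = np.zeros(shape)
    for peer in peer_ids:
        key = shared_keys[tuple(sorted((client_id, peer)))]
        rng = np.random.default_rng(key)        # shared key seeds a common PRG
        pad = rng.standard_normal(shape)
        # The two endpoints add the same pad with opposite signs, so all pads
        # cancel exactly when the server sums the masked updates.
        mask += pad if client_id < peer else -pad
    return mask

def mask_update(update, client_id, peer_ids, shared_keys):
    """Client-side masking of a local model update before upload."""
    return update + pairwise_masks(client_id, peer_ids, shared_keys, update.shape)

if __name__ == "__main__":
    clients = [0, 1, 2, 3]                      # mirrors the 4-client network
    keys = {(i, j): (i * 1000 + j) for i in clients for j in clients if i < j}
    updates = {c: np.random.randn(8) for c in clients}
    masked = {c: mask_update(updates[c], c, [p for p in clients if p != c], keys)
              for c in clients}
    # The server only ever sees masked updates; their sum equals the true sum.
    assert np.allclose(sum(masked.values()), sum(updates.values()))
```

The point of the sketch is the cancellation property: any individual upload is statistically masked, yet the aggregate the server computes is exact, which is the functionality the quantum-distributed keys are used to protect.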
📝 Abstract
Federated learning is essential for decentralized, privacy-preserving model training in the data-driven era. Quantum-enhanced federated learning leverages quantum resources to address privacy and scalability challenges, offering security and efficiency advantages beyond classical methods. However, practical and scalable frameworks addressing privacy concerns in the quantum computing era remain undeveloped. Here, we propose a practical quantum federated learning framework on quantum networks, utilizing distributed quantum secret keys to protect local model updates and enable secure aggregation with information-theoretic security. We experimentally validate our framework on a 4-client quantum network with a scalable structure. Extensive numerical experiments on both quantum and classical datasets show that adding a quantum client significantly enhances the trained global model's ability to classify multipartite entangled and non-stabilizer quantum datasets. Simulations further demonstrate scalability to 200 clients with classical models trained on the MNIST dataset, reducing communication costs by 75% through advanced model compression techniques and achieving rapid training convergence. Our work provides critical insights for building scalable, efficient, and quantum-secure machine learning systems for the coming quantum internet era.
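The abstract does not specify the compression scheme behind the 75% saving; a common choice for federated updates is top-k sparsification, sketched below purely as an illustration. The function names and the 25% keep ratio are assumptions for the example, and the figure ignores the small overhead of transmitting indices.

```python
import numpy as np

def topk_compress(update, keep_ratio=0.25):
    """Keep only the largest-magnitude entries of a model update.

    With keep_ratio=0.25 the client uploads roughly 25% of the values,
    i.e. on the order of the 75% communication reduction reported for the
    200-client simulations (index overhead not counted).
    """
    flat = update.ravel()
    k = max(1, int(keep_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the top-k magnitudes
    return idx.astype(np.int32), flat[idx]

def topk_decompress(indices, values, shape):
    """Server-side reconstruction of the sparse update as a dense array."""
    flat = np.zeros(int(np.prod(shape)))
    flat[indices] = values
    return flat.reshape(shape)
```

Any such lossy compression trades a small amount of per-round fidelity for bandwidth, which is consistent with the reported outcome of reduced communication cost alongside rapid convergence rather than exact gradient transmission.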