On Quantum Perceptron Learning via Quantum Search

📅 2025-03-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work corrects an error in Theorem 2 of arXiv:1602.04799 concerning the probability of sampling, from a Gaussian distribution, a D-dimensional hyperplane that perfectly separates the data, establishing the true scaling as Ω(γᴰ) and refuting the previously claimed Θ(γ) dependence. Method: Building on this correction, the authors integrate Grover's quantum search and quantum walk search into two classical linear programming paradigms, the ellipsoid method and the cutting-plane random walk, to design quantum perceptron training algorithms. Contribution/Results: Both algorithms gain a sub-linear O(√N) speedup in the number of data points N from Grover's search, and the cutting-plane random walk gains an additional O(D¹·⁵) speedup in the dimension D from quantum walk search, solidifying the theoretical foundations of quantum perceptron learning.

📝 Abstract
With the growing interest in quantum machine learning, the perceptron -- a fundamental building block in traditional machine learning -- has emerged as a valuable model for exploring quantum advantages. Two quantum perceptron algorithms based on Grover's search were developed in arXiv:1602.04799 to accelerate training and improve statistical efficiency in perceptron learning. This paper points out and corrects a mistake in the proof of Theorem 2 in arXiv:1602.04799. Specifically, we show that the probability of sampling from a normal distribution a $D$-dimensional hyperplane that perfectly classifies the data scales as $\Omega(\gamma^{D})$ instead of $\Theta(\gamma)$, where $\gamma$ is the margin. We then revisit two well-established linear programming algorithms -- the ellipsoid method and the cutting plane random walk algorithm -- in the context of perceptron learning, and show how quantum search algorithms can be leveraged to enhance the overall complexity. Specifically, both algorithms gain a sub-linear speed-up $O(\sqrt{N})$ in the number of data points $N$ as a result of Grover's algorithm, and an additional $O(D^{1.5})$ speed-up is possible for the cutting plane random walk algorithm employing quantum walk search.
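The corrected $\Omega(\gamma^{D})$ scaling can be sanity-checked numerically. The Monte Carlo sketch below is illustrative only (the dataset construction, margin value, and sample sizes are assumptions, not the paper's proof): it samples hyperplane normals from a standard Gaussian and estimates how often one perfectly separates a margin-$\gamma$ dataset, showing the acceptance probability collapsing as the dimension grows.

```python
import math
import random

def margin_dataset(dim, gamma):
    """Toy dataset in R^dim, linearly separable with margin gamma by w* = e1.

    Illustrative construction (not taken from the paper): points sit at
    gamma along e1 plus a +/- component along each remaining axis.
    """
    data = []
    s = math.sqrt(1.0 - gamma * gamma)
    for k in range(1, dim):
        for sign in (+1.0, -1.0):
            x = [0.0] * dim
            x[0] = gamma
            x[k] = sign * s
            data.append((x, +1))                     # positive class
            data.append(([-v for v in x], -1))       # mirrored negative class
    return data

def perfect_separation_rate(dim, gamma, trials, rng):
    """Fraction of Gaussian-sampled hyperplanes that classify all points correctly."""
    data = margin_dataset(dim, gamma)
    hits = 0
    for _ in range(trials):
        w = [rng.gauss(0.0, 1.0) for _ in range(dim)]  # hyperplane normal ~ N(0, I)
        if all(y * sum(wi * xi for wi, xi in zip(w, x)) > 0 for x, y in data):
            hits += 1
    return hits / trials

rng = random.Random(0)
rates = {d: perfect_separation_rate(d, gamma=0.5, trials=100_000, rng=rng)
         for d in (2, 4, 6)}
# The acceptance rate shrinks rapidly with the dimension, consistent with an
# Omega(gamma^D)-type dependence rather than the originally claimed Theta(gamma).
```

With a fixed margin, each added dimension multiplies the feasible cone's solid angle by an extra margin-dependent factor, which is the intuition behind the exponential-in-$D$ decay.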
Problem

Research questions and friction points this paper is trying to address.

Corrects error in quantum perceptron learning proof
Enhances perceptron learning via quantum search algorithms
Achieves sub-linear speed-up in data processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantum perceptron leverages Grover's search
Corrects error in normal distribution sampling
Enhances complexity with quantum walk search
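The $O(\sqrt{N})$ speed-up in the data size comes from Grover's search over the training points. A minimal statevector simulation (a textbook sketch, not the paper's construction) shows the query-count intuition: roughly $(\pi/4)\sqrt{N}$ oracle calls locate one marked item among $N$, versus about $N/2$ classical queries on average.

```python
import math

def grover_search(n_items, marked, n_iters):
    """Simulate Grover iterations over n_items basis states; real amplitudes suffice."""
    amp = [1.0 / math.sqrt(n_items)] * n_items   # uniform superposition
    for _ in range(n_iters):
        amp[marked] = -amp[marked]               # oracle: flip sign of the marked item
        mean = sum(amp) / n_items                # diffusion: inversion about the mean
        amp = [2.0 * mean - a for a in amp]
    return amp[marked] ** 2                      # success probability on measurement

N = 1024
iters = int(round(math.pi / 4 * math.sqrt(N)))   # ~25 oracle queries vs ~512 classical
p = grover_search(N, marked=7, n_iters=iters)
```

In the perceptron setting, the oracle marks misclassified training points instead of a single fixed item, but the quadratic query saving is the same mechanism.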
Xiaoyu Sun
Aix-Marseille Université, CNRS, LIS, 13288 Marseille, France
Mathieu Roget
PhD Student, LIS, AMU
Computer science, quantum computation, machine learning, game theory, chaos theory
G. Molfetta
Aix-Marseille Université, CNRS, LIS, 13288 Marseille, France
Hachem Kadri
Aix-Marseille Université, CNRS, LIS, 13288 Marseille, France