QUACK: Quantum Aligned Centroid Kernel

📅 2024-05-01
🏛️ International Conference on Quantum Computing and Engineering
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing quantum kernel methods face two critical bottlenecks: (i) quadratic training complexity, *O*(*n*²), limiting scalability; and (ii) hardware constraints of Noisy Intermediate-Scale Quantum (NISQ) devices—limited qubit count, short coherence times, and high error rates—hindering the processing of high-dimensional real-world data (e.g., 784-dimensional MNIST). To address these, the authors propose a class of linear-time quantum kernel methods. The approach replaces the full sample-by-sample kernel matrix with class-wise centroids, reducing training complexity to *O*(*n*) and inference complexity to *O*(*c*), where *c* is the number of classes. The kernel function is constructed via parameterized quantum circuits, and the centroids are jointly optimized within a quantum-classical hybrid framework. Crucially, no feature dimensionality reduction is required. In simulation experiments, the method achieves classification accuracy on par with classical kernel methods on high-dimensional benchmarks such as MNIST. To the authors' knowledge, this is the first approach to preserve the theoretical grounding of kernel methods while substantially improving NISQ compatibility and scalability.

📝 Abstract
Quantum computing (QC) shows potential for application in machine learning (ML). In particular, quantum kernel methods (QKMs) exhibit promising properties for use in supervised ML tasks. However, a major disadvantage of kernel methods is their unfavorable quadratic scaling with the number of training samples. Together with the limits imposed by currently available quantum hardware (NISQ devices)—low qubit coherence times, small numbers of qubits, and high error rates—the use of QC in ML at an industrially relevant scale is currently impossible. As a step toward improving the applicability of QKMs, we introduce QUACK, a quantum kernel algorithm whose time complexity scales linearly with the number of samples during training and is independent of the number of training samples at inference. During training, only the kernel entries between the samples and the class centers are calculated, i.e., the maximum shape of the kernel matrix for n samples and c classes is (n, c). The parameters of the quantum kernel and the positions of the centroids are optimized iteratively. At inference, the circuit is evaluated only once per centroid for each new sample, i.e., c times. We show that QUACK nevertheless provides satisfactory results and can perform at a level similar to classical kernel methods with quadratic training scaling. In addition, our (simulated) algorithm is able to handle high-dimensional datasets such as MNIST, with 784 features, without any dimensionality reduction.
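The centroid idea in the abstract can be sketched classically. The code below is a minimal, illustrative stand-in, not the paper's implementation: a classical RBF kernel substitutes for the parameterized quantum circuit, and a simple centroid-update rule substitutes for the paper's joint hybrid optimization of kernel parameters and centroid positions. The function names (`train_centroids`, `predict`) and the learning-rate scheme are assumptions for illustration only. What it does show faithfully is the complexity claim: training touches only (n, c) kernel entries per pass, and inference evaluates the kernel just c times per sample.

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.05):
    # Classical RBF kernel standing in for the quantum kernel
    # evaluated by a parameterized circuit in the actual paper.
    return np.exp(-gamma * np.sum((x - y) ** 2))

def train_centroids(X, y, n_classes, n_iter=10, lr=0.1):
    # Initialize each centroid at its class mean. Each epoch compares
    # every sample against c centroids only, so the kernel "matrix"
    # has shape (n, c) rather than (n, n): O(n) per pass, not O(n^2).
    centroids = np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])
    for _ in range(n_iter):
        for xi, yi in zip(X, y):
            # Simplified stand-in for the paper's iterative joint
            # optimization: pull the true-class centroid toward the sample.
            centroids[yi] += lr * (xi - centroids[yi])
    return centroids

def predict(x, centroids):
    # Inference evaluates the kernel only c times (once per centroid)
    # and assigns the class with the highest kernel similarity.
    scores = [rbf_kernel(x, mu) for mu in centroids]
    return int(np.argmax(scores))
```

For example, on two well-separated Gaussian blobs, the nearest-centroid rule recovers the class labels while never forming the full n-by-n kernel matrix.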
Problem

Research questions and friction points this paper is trying to address.

Quantum Computing
Supervised Learning
NISQ Limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

QUACK
quantum kernel methods
linear scalability
Kilian Tscharke
Fraunhofer Institute for Applied and Integrated Security
Sebastian Issel
Quantum Security Technologies, Fraunhofer Institute for Applied and Integrated Security, Garching near Munich, Germany
P. Debus
Quantum Security Technologies, Fraunhofer Institute for Applied and Integrated Security, Garching near Munich, Germany