Quantum Algorithms for the Pathwise Lasso

📅 2023-12-21
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the computational bottleneck in computing the Lasso regularization path for high-dimensional linear regression. We propose the first quantum path-following Lasso algorithm, built upon the classical Least Angle Regression (LARS) framework to efficiently generate the full regularization path. Methodologically, we introduce the first quantum implementation of LARS, integrating quantum minimum finding (Dürr–Høyer and Chen–de Wolf variants), approximate KKT condition analysis, duality gap theory, and dequantization techniques. Theoretical contributions include: (i) quantum speedups of $O(\sqrt{d})$ and $O(\sqrt{n})$ in feature dimension $d$ and sample size $n$, respectively; (ii) a total complexity of $\mathrm{polylog}(n)$ for Gaussian design matrices—exponentially faster than classical LARS; (iii) robustness guarantees under approximate quantum computation; (iv) tight lower bounds for quantum and classical Lasso path queries; and (v) an efficient dequantized variant preserving quantum advantage.
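To make the object of study concrete, the following sketch traces the classical baseline the paper accelerates: the full Lasso regularization path computed by LARS. It uses scikit-learn's `lars_path` as a stand-in for the classical algorithm (this is illustrative and not the paper's own code; the data here is synthetic).

```python
# Classical baseline: the full Lasso regularisation path via LARS,
# whose per-iteration cost the paper's quantum algorithm improves.
# Illustrative sketch with synthetic data, using scikit-learn's lars_path.
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
n, d = 50, 10                      # observations, features
X = rng.standard_normal((n, d))
beta_true = np.zeros(d)
beta_true[:3] = [2.0, -1.5, 1.0]   # sparse ground truth
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# alphas: penalty values at the path's kinks; coefs: one column per kink.
alphas, active, coefs = lars_path(X, y, method="lasso")

# The path is piecewise linear in the penalty; each kink is a joining
# (or crossing) time where a feature enters or leaves the active set.
print(coefs.shape)
```

The number of kinks (columns of `coefs`) is what the path-following algorithm iterates over; the quantum speedups apply to the work done per kink.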
📝 Abstract
We present a novel quantum high-dimensional linear regression algorithm with an $\ell_1$-penalty based on the classical LARS (Least Angle Regression) pathwise algorithm. Similarly to available classical algorithms for Lasso, our quantum algorithm provides the full regularisation path as the penalty term varies, but quadratically faster per iteration under specific conditions. A quadratic speedup on the number of features $d$ is possible by using the simple quantum minimum-finding subroutine from Dürr and Høyer (arXiv'96) in order to obtain the joining time at each iteration. We then improve upon this simple quantum algorithm and obtain a quadratic speedup both in the number of features $d$ and the number of observations $n$ by using the approximate quantum minimum-finding subroutine from Chen and de Wolf (ICALP'23). In order to do so, we approximately compute the joining times to be searched over by the approximate quantum minimum-finding subroutine. As another main contribution, we prove, via an approximate version of the KKT conditions and a duality gap, that the LARS algorithm (and therefore our quantum algorithm) is robust to errors. This means that it still outputs a path that minimises the Lasso cost function up to a small error if the joining times are only approximately computed. Furthermore, we show that, when the observations are sampled from a Gaussian distribution, our quantum algorithm's complexity only depends polylogarithmically on $n$, exponentially better than the classical LARS algorithm, while keeping the quadratic improvement on $d$. Moreover, we propose a dequantised version of our quantum algorithm that also retains the polylogarithmic dependence on $n$, albeit presenting the linear scaling on $d$ from the standard LARS algorithm. Finally, we prove query lower bounds for classical and quantum Lasso algorithms.
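The Dürr–Høyer subroutine mentioned above finds the minimum of $d$ values in $O(\sqrt{d})$ quantum queries by repeatedly Grover-searching for an entry below the current threshold. A classical emulation of that threshold-descent loop (costing $O(d)$ per step here, where each scan would be a $\sim\sqrt{d}$-query Grover search quantumly) can be sketched as follows; the joining times are hypothetical stand-ins:

```python
# Toy classical emulation of the Durr-Hoyer minimum-finding loop used to
# locate the smallest joining time at each LARS iteration. Quantumly, each
# "find an index below the threshold" step is a Grover search; here we
# simply scan the array, which reproduces the logic but not the speedup.
import numpy as np

def minimum_finding(values, rng):
    """Return an index of the minimum via random threshold descent."""
    idx = rng.integers(len(values))           # random initial pivot
    while True:
        better = np.flatnonzero(values < values[idx])
        if better.size == 0:                  # nothing smaller: at the min
            return idx
        # Grover search returns a (near-)uniformly random smaller entry.
        idx = rng.choice(better)

rng = np.random.default_rng(1)
joining_times = rng.random(100)               # hypothetical joining times
i = minimum_finding(joining_times, rng)
print(joining_times[i] == joining_times.min())
```

The expected number of threshold updates is $O(\log d)$, which is what bounds the number of Grover searches in the quantum routine.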
Problem

Research questions and friction points this paper is trying to address.

Develops a quantum algorithm for high-dimensional linear regression with an $\ell_1$ penalty.
Achieves quadratic speedups in the number of features and the number of observations under specific conditions.
Proves that the LARS algorithm is robust to errors in the computation of the joining times.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantum algorithm for Lasso with quadratic speedup.
Approximate quantum minimum-finding for improved efficiency.
Robustness to errors via approximate KKT conditions.
J. F. Doriguello
HUN-REN Alfréd Rényi Institute of Mathematics, Budapest, Hungary; Centre for Quantum Technologies, National University of Singapore, Singapore
Debbie Lim
University of Latvia
Quantum algorithms; quantum online learning algorithms
Chi Seng Pun
School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore
P. Rebentrost
Centre for Quantum Technologies, National University of Singapore, Singapore; Department of Computer Science, National University of Singapore, Singapore
Tushar Vaidya
School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore