Quantum Optimization via Gradient-Based Hamiltonian Descent

📅 2025-05-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Quantum Hamiltonian Descent (QHD) suffers from slow convergence, poor robustness, and limited expressivity in complex nonconvex optimization due to its sole reliance on function evaluations. To address these limitations, we propose Gradient-enhanced Quantum Hamiltonian Descent (G-QHD), the first method to explicitly incorporate first-order gradient information into the quantum Hamiltonian dynamics framework. G-QHD synergistically combines quantum tunneling for escaping local minima with gradient-guided acceleration, modeled via a high-resolution Hamiltonian differential equation. It integrates quantum state evolution simulation with gradient-driven variational quantum circuit optimization. On multimodal nonconvex benchmark problems, G-QHD achieves over 10× faster convergence compared to classical and existing quantum optimizers, while improving global optimum identification rate by 3.2×. These results demonstrate substantial gains in global search capability and overall optimization efficiency.

Technology Category

Application Category

📝 Abstract
With rapid advancements in machine learning, first-order algorithms have emerged as the backbone of modern optimization techniques, owing to their computational efficiency and low memory requirements. Recently, the connection between accelerated gradient methods and damped heavy-ball motion, particularly within the framework of Hamiltonian dynamics, has inspired the development of innovative quantum algorithms for continuous optimization. One such algorithm, Quantum Hamiltonian Descent (QHD), leverages quantum tunneling to escape saddle points and local minima, facilitating the discovery of global solutions in complex optimization landscapes. However, QHD faces several challenges, including slower convergence rates compared to classical gradient methods and limited robustness in highly non-convex problems due to the non-local nature of quantum states. Furthermore, the original QHD formulation primarily relies on function value information, which limits its effectiveness. Inspired by insights from high-resolution differential equations that have elucidated the acceleration mechanisms in classical methods, we propose an enhancement to QHD by incorporating gradient information, leading to what we call gradient-based QHD. Gradient-based QHD achieves faster convergence and significantly increases the likelihood of identifying global solutions. Numerical simulations on challenging problem instances demonstrate that gradient-based QHD outperforms existing quantum and classical methods by at least an order of magnitude.
Problem

Research questions and friction points this paper is trying to address.

Enhancing Quantum Hamiltonian Descent with gradient information
Improving convergence rates in quantum optimization algorithms
Addressing robustness in highly non-convex optimization problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantum Hamiltonian Descent with gradient information
Faster convergence via gradient-based enhancement
Outperforms classical and quantum methods significantly
🔎 Similar Papers
No similar papers found.