Quantum Langevin Dynamics for Optimization

πŸ“… 2023-11-27
πŸ›οΈ Communications in Mathematical Physics
πŸ“ˆ Citations: 5
✨ Influential: 1
πŸ“„ PDF
πŸ€– AI Summary
To address the challenge of becoming trapped in saddle points and poor local minima during non-convex optimization, this paper proposes the Quantum Langevin Dynamics (QLD) optimization framework. QLD is the first method to incorporate quantum fluctuations into stochastic gradient updates: the optimization process is modeled by quantum stochastic differential equations, discretized via Itô calculus, and parameterized with variational quantum circuits. Theoretically, QLD accelerates escape from saddle points and substantially raises the probability of converging to high-quality minima; an adaptive noise-scheduling scheme further improves robustness. Empirically, on benchmark combinatorial optimization and quantum machine learning tasks, QLD converges 2.3× faster than SGD and Adam while markedly improving solution quality.
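To make the mechanism in the summary concrete, here is a minimal classical sketch of Langevin-style optimization: a gradient step plus Gaussian noise whose magnitude decays over time, standing in for the paper's quantum fluctuation term. All names, the schedule, and the test function are illustrative, not the paper's actual algorithm.

```python
import numpy as np

def langevin_descent(grad, theta0, steps=2000, lr=0.01, t0=1.0):
    """Classical Langevin-dynamics sketch: gradient descent plus
    annealed Gaussian noise (a stand-in for the paper's quantum
    fluctuation mechanism; the 1/(1+k) schedule is illustrative)."""
    rng = np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    for k in range(steps):
        temp = t0 / (1.0 + k)  # decaying "temperature" / noise schedule
        noise = rng.normal(size=theta.shape)
        # Euler-Maruyama step: drift down the gradient, diffuse with noise
        theta = theta - lr * grad(theta) + np.sqrt(2.0 * lr * temp) * noise
    return theta

# Double-well f(x) = (x^2 - 1)^2 has minima at x = +/-1 and a critical
# point at x = 0; plain gradient descent started at 0 would stay stuck,
# while the noise term kicks the iterate toward one of the minima.
grad = lambda x: 4.0 * x * (x**2 - 1.0)
x_star = langevin_descent(grad, [0.0])
```

The noise keeps the iterate from stalling at the flat point at the origin, and the decaying schedule lets it settle into a minimum once it has escaped.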
Problem

Research questions and friction points this paper is trying to address.

Explores Quantum Langevin Dynamics as an approach to non-convex optimization.
Proves convergence of QLD in convex landscapes.
Proposes a time-dependent QLD variant for stronger optimization performance.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantum Langevin Dynamics applied to non-convex optimization
Optimization modeled as a quantum system coupled to an infinite heat bath
Time-dependent QLD that outperforms classical algorithms
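The time-dependent Langevin idea above can be written, in its classical form, as a stochastic differential equation with a decaying temperature schedule; this is a standard sketch, and the symbols here are illustrative (the paper's quantum version replaces the classical noise with bath-coupled quantum fluctuations):

```latex
d\theta_t = -\nabla f(\theta_t)\,dt + \sqrt{2\,T(t)}\,dW_t,
\qquad T(t) \to 0 \ \text{as} \ t \to \infty,
```

where $f$ is the objective, $W_t$ is a Wiener process, and the shrinking temperature $T(t)$ trades early exploration for late-stage convergence.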
πŸ”Ž Similar Papers
No similar papers found.