AI Summary
To address the problem of becoming trapped in saddle points and poor local minima during non-convex optimization, this paper proposes the Quantum Langevin Dynamics (QLD) optimization framework. QLD is the first method to incorporate quantum fluctuation mechanisms into stochastic gradient updates: it models the optimization process with quantum stochastic differential equations, discretizes them via Itô calculus, and parameterizes the dynamics with variational quantum circuits. Theoretically, QLD accelerates escape from saddle points and substantially increases the probability of converging to high-quality minima; an adaptive noise-scheduling scheme further improves robustness. Empirically, on benchmark combinatorial optimization and quantum machine learning tasks, QLD converges 2.3× faster than SGD and Adam while markedly improving solution quality.
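The summary does not give the paper's update rule, but the core idea of a Langevin-style gradient update with an annealed noise schedule can be sketched classically. The code below is an illustrative sketch, not the paper's QLD method: the function names (`langevin_step`, `optimize`), the geometric decay schedule, and all hyperparameter values are assumptions chosen for the demo, and the quantum fluctuation term is stood in for by classical Gaussian noise.

```python
import numpy as np

def langevin_step(x, grad_fn, lr, noise_scale, rng):
    """One Langevin-style update: a gradient step plus injected Gaussian noise.
    The noise term lets the iterate escape saddle points where grad_fn(x) = 0."""
    return x - lr * grad_fn(x) + np.sqrt(2 * lr) * noise_scale * rng.normal(size=x.shape)

def optimize(grad_fn, x0, steps=2000, lr=1e-2, noise0=0.5, decay=0.999, seed=0):
    """Langevin-style descent with a geometrically decaying noise schedule
    (a simple stand-in for the adaptive noise scheduling the summary mentions)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    noise = noise0
    for _ in range(steps):
        x = langevin_step(x, grad_fn, lr, noise, rng)
        noise *= decay  # anneal: explore early, settle into a minimum late
    return x

# Non-convex double well f(x) = (x^2 - 1)^2, minima at x = ±1.
grad = lambda x: 4 * x * (x**2 - 1)
# x0 = 0 is a stationary point where plain gradient descent would stay stuck;
# the injected noise pushes the iterate off it toward one of the minima.
x_final = optimize(grad, x0=[0.0])
```

Starting exactly at the stationary point `x = 0`, the gradient vanishes and a noiseless method makes no progress; the noise term breaks the symmetry and the decay schedule then lets the iterate settle near one of the two minima.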