🤖 AI Summary
Diffusion probabilistic models (DPMs) incur prohibitively high training costs in computation, energy, and hardware when generating high-resolution images and audio.
Method: This paper proposes the first DPM acceleration framework built on quantum Carleman linearization. It introduces quantum-classical hybrid solvers based on two approaches, DPM-solver-k and UniPC, that combine quantum ordinary differential equation (ODE) solvers, quantum linear systems algorithms (QLSAs), and linear combination of Hamiltonian simulation (LCHS) to efficiently approximate DPM training dynamics.
Contribution/Results: Theoretical analysis shows that our approach reduces the computational complexity of key steps from classical polynomial to quasi-logarithmic scaling, substantially lowering energy consumption and hardware demands. Empirical evaluation demonstrates strong scalability on large-scale generative tasks. This work establishes a novel paradigm for deploying quantum machine learning in practical, production-grade generative AI systems.
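To make the Carleman linearization step concrete, here is a minimal classical sketch on a scalar quadratic ODE. Carleman linearization lifts a polynomial ODE into a linear system on monomials, which is the form quantum ODE solvers and QLSAs can then attack. The toy ODE, function names, and truncation order below are illustrative, not taken from the paper.

```python
import numpy as np

def carleman_matrix(a, b, N):
    """Truncated Carleman matrix for the scalar quadratic ODE
    dx/dt = a*x + b*x^2, written in monomial coordinates y_j = x^j
    (j = 1..N), where each monomial obeys the linear-looking relation
        dy_j/dt = j*a*y_j + j*b*y_{j+1},
    and y_{N+1} is dropped at the truncation order N."""
    A = np.zeros((N, N))
    for j in range(1, N + 1):
        A[j - 1, j - 1] = j * a      # linear term
        if j < N:
            A[j - 1, j] = j * b      # coupling to the next monomial
    return A

a, b, x0, t, N = -1.0, 0.2, 0.5, 1.0, 8
A = carleman_matrix(a, b, N)
y0 = np.array([x0 ** j for j in range(1, N + 1)])

# Propagate the lifted linear system: x(t) ~ first component of exp(A*t) y0.
# A is upper triangular with distinct eigenvalues j*a, so eig-based
# exponentiation is safe here.
w, V = np.linalg.eig(A)
x_carleman = float(np.real(((V * np.exp(w * t)) @ np.linalg.inv(V) @ y0)[0]))

# Closed-form solution of this Bernoulli ODE, for comparison.
u = (1.0 / x0 + b / a) * np.exp(-a * t) - b / a
x_exact = 1.0 / u
```

For dissipative dynamics (a < 0) and |x0| < 1, the truncation error shrinks rapidly with N, which is what makes the lifted linear system a faithful stand-in for the nonlinear one.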
📝 Abstract
A diffusion probabilistic model (DPM) is a generative model renowned for its ability to produce high-quality outputs in tasks such as image and audio generation. However, training DPMs on large, high-dimensional datasets such as high-resolution images or audio incurs significant computational, energy, and hardware costs. In this work, we introduce efficient quantum algorithms for implementing DPMs through various quantum ODE solvers. These algorithms highlight the potential of quantum Carleman linearization for diverse mathematical structures, leveraging state-of-the-art quantum linear system solvers (QLSS) or linear combination of Hamiltonian simulation (LCHS). Specifically, we focus on two approaches: DPM-solver-$k$, which employs exact $k$-th order derivatives to compute a polynomial approximation of $\epsilon_\theta(x_\lambda, \lambda)$; and UniPC, which uses finite differences of $\epsilon_\theta(x_\lambda, \lambda)$ at different points $(x_{s_m}, \lambda_{s_m})$ to approximate higher-order derivatives. As such, this work represents one of the most direct and pragmatic applications of quantum algorithms to large-scale machine learning models, taking substantial steps toward demonstrating the practical utility of quantum computing.
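The finite-difference idea behind the UniPC-style approximation can be illustrated classically. The sketch below uses a smooth scalar stand-in for the noise predictor $\epsilon_\theta$ (a function of $\lambda$ only); the function, step sizes, and variable names are invented for illustration and are not the paper's actual scheme, which uses multiple buffered points for higher orders.

```python
import numpy as np

# Toy stand-in for the noise-prediction network eps_theta(x_lambda, lambda):
# a smooth scalar function of lambda only, so errors can be measured exactly.
def eps(lam):
    return np.exp(-lam) * np.sin(lam)

# Two previously evaluated points (lambda_{s_1} < lambda_{s_0}), as a
# multistep predictor-corrector method would hold in its buffer.
lam_s1, lam_s0 = 0.45, 0.50

# Backward finite difference approximating d(eps)/d(lambda) near lam_s0,
# standing in for UniPC's derivative estimates from past evaluations.
d1 = (eps(lam_s0) - eps(lam_s1)) / (lam_s0 - lam_s1)

# Predict eps at the next point: a first-order (Taylor) model using the
# finite-difference slope versus the zeroth-order (constant) model.
lam_t = 0.55
pred0 = eps(lam_s0)                          # zeroth-order prediction
pred1 = eps(lam_s0) + d1 * (lam_t - lam_s0)  # first-order prediction
truth = eps(lam_t)
```

The point of the comparison is that reusing past evaluations to estimate derivatives buys a higher-order local error without any extra calls to the (expensive) predictor, which is exactly the economy UniPC exploits.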