Provable Accuracy Bounds for Hybrid Dynamical Optimization and Sampling

📅 2024-10-08
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Hybrid analog/digital optimization and sampling algorithms lack non-asymptotic convergence guarantees and systematic hyperparameter design. Method: This paper addresses the controllable-accuracy challenge in large-neighborhood local search (LNLS) frameworks that integrate analog dynamical accelerators (DXs) with digital computation, proposing a block Langevin diffusion (BLD) modeling framework. Contribution/Results: BLD establishes the first non-asymptotic KL-divergence convergence bound for hybrid LNLS. Combining 2-Wasserstein distance analysis with stochastic and cyclic block selection strategies, the paper derives explicit quantitative relationships among device error, algorithmic hyperparameters, and sampling performance. It proves exponential convergence under ideal DX conditions and gives an upper bound on the bias induced by device variability. The framework unifies accuracy, hardware constraints, and hyperparameters into a provably correct, tunable, and deployable design paradigm.

📝 Abstract
Analog dynamical accelerators (DXs) are a growing sub-field in computer architecture research, offering order-of-magnitude gains in power efficiency and latency over traditional digital methods in several machine learning, optimization, and sampling tasks. However, limited-capacity accelerators require hybrid analog/digital algorithms to solve real-world problems, commonly using large-neighborhood local search (LNLS) frameworks. Unlike fully digital algorithms, hybrid LNLS has no non-asymptotic convergence guarantees and no principled hyperparameter selection schemes, particularly limiting cross-device training and inference. In this work, we provide non-asymptotic convergence guarantees for hybrid LNLS by reducing to block Langevin Diffusion (BLD) algorithms. Adapting tools from classical sampling theory, we prove exponential KL-divergence convergence for randomized and cyclic block selection strategies using ideal DXs. With finite device variation, we provide explicit bounds on the 2-Wasserstein bias in terms of step duration, noise strength, and function parameters. Our BLD model provides a key link between established theory and novel computing platforms, and our theoretical results provide a closed-form expression linking device variation, algorithm hyperparameters, and performance.
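The block Langevin diffusion reduction described above can be sketched in a few lines: partition the variables into coordinate blocks, then at each step select a block (uniformly at random or in a cyclic sweep) and apply a discretized Langevin update to that block only. The sketch below is illustrative, not the paper's implementation; the quadratic target potential, step size, and block sizes are assumptions chosen for a minimal self-contained example.

```python
import numpy as np

def grad_f(x):
    # Illustrative target: quadratic potential f(x) = ||x||^2 / 2,
    # whose stationary density is proportional to exp(-f(x)).
    return x

def block_langevin_step(x, block, eta, rng):
    # Discretized Langevin update restricted to one coordinate block:
    #   x_B <- x_B - eta * grad_B f(x) + sqrt(2 * eta) * xi
    noise = rng.standard_normal(len(block))
    x = x.copy()
    x[block] = x[block] - eta * grad_f(x)[block] + np.sqrt(2 * eta) * noise
    return x

def sample_bld(dim=8, block_size=4, eta=0.01, n_steps=20000,
               cyclic=False, seed=0):
    # Block Langevin diffusion with either cyclic sweeps or
    # uniformly random block selection at each step.
    rng = np.random.default_rng(seed)
    blocks = [np.arange(i, min(i + block_size, dim))
              for i in range(0, dim, block_size)]
    x = np.zeros(dim)
    for t in range(n_steps):
        if cyclic:
            block = blocks[t % len(blocks)]
        else:
            block = blocks[rng.integers(len(blocks))]
        x = block_langevin_step(x, block, eta, rng)
    return x

x = sample_bld()
print(x.shape)
```

In the hybrid LNLS setting the paper studies, the per-block update would be carried out by the analog accelerator rather than this explicit gradient step, and device variation would perturb the update; the bias bounds in the paper quantify that perturbation's effect.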
Problem

Research questions and friction points this paper is trying to address.

Hybrid analog/digital algorithms lack convergence guarantees
No principled hyperparameter selection for hybrid LNLS
Impact of device variation on performance unquantified
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid analog/digital algorithms for optimization
Block Langevin Diffusion convergence guarantees
Closed-form hyperparameter and performance linkage
Matthew X. Burns
PhD Student, University of Rochester
Ising machines, combinatorial optimization, high performance computing, heterogeneous computing, GPU
Qingyuan Hou
Department of Electrical and Computer Engineering, University of Rochester
Michael C. Huang
Department of Electrical and Computer Engineering, University of Rochester