🤖 AI Summary
Problem: Hybrid analog/digital optimization and sampling algorithms lack non-asymptotic convergence guarantees and principled hyperparameter selection.
Method: The paper addresses the controllable-accuracy challenge in large-neighborhood local search (LNLS) frameworks that couple analog dynamical accelerators (DXs) with digital computation, proposing a block Langevin diffusion (BLD) modeling framework.
Contribution/Results: BLD yields the first non-asymptotic KL-divergence convergence bounds for hybrid LNLS. Combining 2-Wasserstein distance analysis with randomized and cyclic block selection strategies, the paper derives explicit quantitative relationships among device error, algorithmic hyperparameters, and sampling performance: exponential convergence is proven under ideal DX conditions, and an upper bound is given on the bias induced by device variability. The framework unifies accuracy, hardware constraints, and hyperparameters into a provably convergent, tunable, and deployable design paradigm.
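To make the BLD picture concrete, below is a minimal sketch of discretized block Langevin dynamics with randomized or cyclic block selection. All names (`bld_step`, `noise_strength`, `device_sigma`, etc.) and the added device-variation noise term are illustrative assumptions for this sketch, not the paper's notation or its actual DX model.

```python
# Minimal sketch of discretized block Langevin diffusion (BLD) with randomized
# or cyclic block selection. Names and the device-variation term are
# illustrative assumptions, not the paper's notation or DX model.
import numpy as np


def bld_step(x, grad_f, block, step=1e-2, noise_strength=1.0, device_sigma=0.0, rng=None):
    """One Euler-Maruyama Langevin update restricted to the coordinates in `block`.

    `device_sigma` (hypothetical) injects extra Gaussian noise modeling analog device variation.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x.copy()
    g = grad_f(x)                                   # full gradient; only the block entries are used
    drift = -step * g[block]
    diffusion = np.sqrt(2.0 * noise_strength * step) * rng.standard_normal(block.size)
    device = device_sigma * np.sqrt(step) * rng.standard_normal(block.size)
    x[block] = x[block] + drift + diffusion + device
    return x


def bld_sample(x0, grad_f, block_size, n_steps, cyclic=False, rng_seed=0, **step_kwargs):
    """Run BLD, choosing blocks cyclically or uniformly at random at each step."""
    rng = np.random.default_rng(rng_seed)
    x = np.array(x0, dtype=float)
    dim = x.size
    blocks = [np.arange(i, min(i + block_size, dim)) for i in range(0, dim, block_size)]
    for t in range(n_steps):
        block = blocks[t % len(blocks)] if cyclic else blocks[rng.integers(len(blocks))]
        x = bld_step(x, grad_f, block, rng=rng, **step_kwargs)
    return x


if __name__ == "__main__":
    # Example: sample approximately from a standard Gaussian, f(x) = ||x||^2 / 2, so grad_f(x) = x.
    sample = bld_sample(np.zeros(8), grad_f=lambda x: x, block_size=2, n_steps=5000, step=1e-2)
    print(sample)
```

In this toy Gaussian example, the sampling bias shrinks as `step` decreases and grows with `device_sigma`, mirroring (qualitatively) the step-duration and device-variation dependence described in the paper's bias bound.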
📝 Abstract
Analog dynamical accelerators (DXs) are the focus of a growing sub-field of computer architecture research, offering order-of-magnitude gains in power efficiency and latency over traditional digital methods in several machine learning, optimization, and sampling tasks. However, limited-capacity accelerators require hybrid analog/digital algorithms to solve real-world problems, commonly using large-neighborhood local search (LNLS) frameworks. Unlike fully digital algorithms, hybrid LNLS has no non-asymptotic convergence guarantees and no principled hyperparameter selection schemes, particularly limiting cross-device training and inference. In this work, we provide non-asymptotic convergence guarantees for hybrid LNLS by reducing to block Langevin diffusion (BLD) algorithms. Adapting tools from classical sampling theory, we prove exponential KL-divergence convergence for randomized and cyclic block selection strategies using ideal DXs. With finite device variation, we provide explicit bounds on the 2-Wasserstein bias in terms of step duration, noise strength, and function parameters. Our BLD model provides a key link between established theory and novel computing platforms, and our theoretical results provide a closed-form expression linking device variation, algorithm hyperparameters, and performance.
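For orientation, the classical full-coordinate Langevin baseline that such sampling-theory arguments adapt can be written as follows. This is standard background (overdamped Langevin dynamics under a log-Sobolev inequality), not the paper's block-wise or finite-variation bound; the log-Sobolev constant α is an assumption of this sketch.

```latex
% Classical (full-coordinate) Langevin baseline -- background only, not the paper's BLD result.
% Target distribution: \pi(x) \propto e^{-f(x)}.
\begin{equation}
  \mathrm{d}X_t = -\nabla f(X_t)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}W_t
\end{equation}
% If \pi satisfies a log-Sobolev inequality with constant \alpha > 0,
% the law \rho_t of X_t converges exponentially in KL divergence:
\begin{equation}
  \mathrm{KL}\!\left(\rho_t \,\middle\|\, \pi\right)
    \le e^{-2\alpha t}\, \mathrm{KL}\!\left(\rho_0 \,\middle\|\, \pi\right)
\end{equation}
% The paper extends this style of guarantee to block-wise updates (only the block handled
% by the DX evolves per call) and quantifies the extra 2-Wasserstein bias from device variation.
```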