Better Models and Algorithms for Learning Ising Models from Dynamics

📅 2025-07-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work studies efficient learning of the structure and parameters of Ising models under a more natural observation model: only configurations actually realized during Markov chain evolution are observed—contrary to the standard assumption of observing all update attempts, including rejected (no-change) proposals. For this weak-observation setting, we propose the first polynomial-time algorithm applicable to general reversible single-site update chains, including Metropolis–Hastings chains. Our method leverages robust statistical properties of reversible chains and integrates high-dimensional sparse graph recovery with parameter estimation techniques. For Ising models with maximum degree $d$, our algorithm exactly reconstructs the dependency graph in $\mathsf{poly}(d) \cdot n^2 \log n$ time and estimates parameters in $\widetilde{O}(2^d n)$ time, achieving accuracy comparable to optimal full-trajectory methods. This significantly enhances learnability and practicality in realistic dynamic settings.

📝 Abstract
We study the problem of learning the structure and parameters of the Ising model, a fundamental model of high-dimensional data, when observing the evolution of an associated Markov chain. A recent line of work has studied the natural problem of learning when observing an evolution of the well-known Glauber dynamics [Bresler, Gamarnik, Shah, IEEE Trans. Inf. Theory 2018; Gaitonde, Mossel, STOC 2024], which provides an arguably more realistic generative model than the classical i.i.d. setting. However, this prior work crucially assumes that all site update attempts are observed, \emph{even when this attempt does not change the configuration}: this strong observation model is seemingly essential for these approaches. While perhaps possible in restrictive contexts, this precludes applicability to most realistic settings where we can observe \emph{only} the stochastic evolution itself, a minimal and natural assumption for any process we might hope to learn from. However, designing algorithms that succeed in this more realistic setting has remained an open problem [Bresler, Gamarnik, Shah, IEEE Trans. Inf. Theory 2018; Gaitonde, Moitra, Mossel, STOC 2025]. In this work, we give the first algorithms that efficiently learn the Ising model in this much more natural observation model that only observes when the configuration changes. For Ising models with maximum degree $d$, our algorithm recovers the underlying dependency graph in time $\mathsf{poly}(d) \cdot n^2 \log n$ and then the actual parameters in additional $\widetilde{O}(2^d n)$ time, which qualitatively matches the state-of-the-art even in the i.i.d. setting in a much weaker observation model. Our analysis holds more generally for a broader class of reversible, single-site Markov chains that also includes the popular Metropolis chain by leveraging more robust properties of reversible Markov chains.
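To make the observation-model distinction concrete, here is a minimal sketch (not the paper's algorithm) of Glauber dynamics on an Ising model under the weak observation model the abstract describes: each step resamples one site from its conditional distribution, but a learner records a configuration only when it actually changes, so rejected (no-change) updates are invisible. The function name, the coupling matrix `J`, and the external field `h` are illustrative assumptions, not names from the paper.

```python
import numpy as np

def glauber_weak_observations(J, h, steps, seed=None):
    """Simulate single-site Glauber dynamics on an Ising model with
    coupling matrix J (symmetric, zero diagonal) and external field h,
    returning only the configurations actually realized: update attempts
    that leave the state unchanged produce no observation."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)              # random initial configuration
    observed = [s.copy()]                        # the starting state is seen
    for _ in range(steps):
        i = rng.integers(n)                      # pick a site uniformly at random
        field = J[i] @ s + h[i]                  # local field at site i
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))  # P(s_i = +1 | rest)
        new_spin = 1 if rng.random() < p_plus else -1
        if new_spin != s[i]:
            s[i] = new_spin
            observed.append(s.copy())            # only realized changes recorded
        # else: the no-change resample is invisible to the learner
    return observed

# toy example: a 3-spin chain with uniform couplings and no external field
J = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
h = np.zeros(3)
trace = glauber_weak_observations(J, h, steps=200, seed=0)
```

Every pair of consecutive recorded configurations differs in exactly one site; the strong observation model of prior work would additionally reveal which site was attempted at every step, including the steps omitted here.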
Problem

Research questions and friction points this paper is trying to address.

Learning Ising models from realistic Markov chain observations
Overcoming limitations of prior work requiring full update visibility
Developing efficient algorithms for sparse configuration-change-only data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning Ising models from Markov chain dynamics
Observing only configuration changes, not all updates
Efficient algorithms for dependency graph recovery