Equilibrium Propagation for Non-Conservative Systems

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work extends equilibrium propagation from conservative systems to non-conservative systems with non-reciprocal interactions, enabling its application to a broader class of neural architectures, such as feedforward networks, while preserving exact gradient computation. The key innovation is a dynamical correction added during the learning phase, proportional to the non-reciprocal coupling terms, so that both inference and learning are still carried out at stationary states. By formulating a variational principle in an augmented state space, the authors rigorously derive the exact gradient for equilibrium propagation in non-conservative settings, overcoming the traditional reliance on energy-based dynamics. Experiments on MNIST show that the proposed method converges faster and performs better than prior approaches.

📝 Abstract
Equilibrium Propagation (EP) is a physics-inspired learning algorithm that uses stationary states of a dynamical system both for inference and for learning. In its original formulation it is limited to conservative systems, i.e. to dynamics that derive from an energy function. Given their importance in applications, it is desirable to extend EP to non-conservative systems, i.e. systems with non-reciprocal interactions. Previous attempts to generalize EP to such systems failed to compute the exact gradient of the cost function. Here we propose a framework that extends EP to arbitrary non-conservative systems, including feedforward networks. We keep the key property of equilibrium propagation, namely the use of stationary states both for inference and learning. However, we modify the dynamics in the learning phase by a term proportional to the non-reciprocal part of the interaction so as to obtain the exact gradient of the cost function. This algorithm can also be derived from a variational formulation that generates the learning dynamics through an energy function defined over an augmented state space. Numerical experiments on the MNIST database show that this algorithm achieves better performance and learns faster than previous proposals.
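As background, here is a minimal sketch of the conservative (symmetric-weight) EP scheme that the paper builds on: a free phase relaxes to a stationary state for inference, a nudged phase relaxes with a small cost term added, and the weight update is a finite-difference contrast between the two fixed points. All sizes, the nudging strength, and the target value are illustrative; the paper's correction for non-reciprocal (non-symmetric) couplings is its own contribution and is not implemented here.

```python
import numpy as np

# Toy conservative EP: symmetric couplings guarantee an energy function exists.
rng = np.random.default_rng(0)
n = 6                                   # illustrative network size
out = n - 1                             # index of the single "output" unit

W = rng.normal(0.0, 0.2, (n, n))
W = 0.5 * (W + W.T)                     # reciprocal (symmetric) interactions
np.fill_diagonal(W, 0.0)
b = rng.normal(0.0, 0.5, n)             # biases, so the free fixed point is nontrivial

rho = np.tanh                           # activation
def drho(s):
    return 1.0 - np.tanh(s) ** 2

def relax(W, s, beta, y, steps=400, dt=0.05):
    """Relax to a stationary state of E(s) + beta * C(s), C = (s_out - y)^2 / 2."""
    for _ in range(steps):
        ds = -s + drho(s) * (W @ rho(s) + b)   # -dE/ds for the Hopfield-like energy
        ds[out] -= beta * (s[out] - y)          # nudging term -beta * dC/ds
        s = s + dt * ds
    return s

y, beta = 0.7, 0.1
s_free = relax(W, np.zeros(n), 0.0, y)          # free phase: inference
s_nudge = relax(W, s_free, beta, y)             # nudged phase: learning

# Contrastive EP update: approximates -dC/dW as beta -> 0.
dW = (np.outer(rho(s_nudge), rho(s_nudge)) -
      np.outer(rho(s_free), rho(s_free))) / beta
W_new = W + 0.5 * dW                            # one small learning step
```

The point of the sketch is that both phases use only steady states of the same dynamics. With non-symmetric W this energy function no longer exists, which is exactly the gap the paper closes by modifying the learning-phase dynamics.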
Problem

Research questions and friction points this paper is trying to address.

Equilibrium Propagation, non-conservative systems, non-reciprocal interactions, gradient computation, learning algorithm
Innovation

Methods, ideas, or system contributions that make the work stand out.

Equilibrium Propagation, non-conservative systems, non-reciprocal interactions, exact gradient, variational formulation
Antonino Emanuele Scurria
Laboratoire d’Information Quantique (LIQ) CP224, Université libre de Bruxelles (ULB), Av. F. D. Roosevelt 50, 1050 Bruxelles, Belgium
Dimitri Vanden Abeele
Laboratoire d’Information Quantique (LIQ) CP224, Université libre de Bruxelles (ULB), Av. F. D. Roosevelt 50, 1050 Bruxelles, Belgium
Bortolo Matteo Mognetti
Interdisciplinary Center for Nonlinear Phenomena and Complex Systems CP231, Université libre de Bruxelles (ULB), Av. F. D. Roosevelt 50, 1050 Bruxelles, Belgium
Serge Massar
Université libre de Bruxelles
quantum information, quantum optics, nonlinear optics, reservoir computing, quantum gravity