Lagrangian-based Equilibrium Propagation: generalisation to arbitrary boundary conditions & equivalence with Hamiltonian Echo Learning

📅 2025-06-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Conventional energy-based models (EBMs) and equilibrium propagation (EP) lack principled frameworks for learning dynamic trajectories under time-varying inputs, and existing variants fail to satisfy hardware-friendly constraints—namely, forward-only computation, constant iteration count, and local implementability—while preserving variational consistency across transient dynamics.

Method: We generalize EP to dynamic EBMs via the Generalized Lagrangian Equilibrium Propagation (GLEP) framework, grounded in a trajectory-level variational principle that extends the generalized Lagrangian formalism to the full system evolution—not just steady states. We rigorously analyze boundary-condition effects on gradient estimation and derive necessary and sufficient conditions for hardware compatibility.

Results: We prove that Hamiltonian Echo Learning (HEL) is the unique GLEP instance satisfying all three hardware constraints. Furthermore, we establish the first formal equivalence between GLEP and HEL, yielding a biologically plausible yet engineering-practical dynamic EP learning paradigm for spiking and analog neuromorphic hardware.

📝 Abstract
Equilibrium Propagation (EP) is a learning algorithm for training Energy-based Models (EBMs) on static inputs which leverages the variational description of their fixed points. Extending EP to time-varying inputs is a challenging problem, as the variational description must apply to the entire system trajectory rather than just fixed points, and careful consideration of boundary conditions becomes essential. In this work, we present Generalized Lagrangian Equilibrium Propagation (GLEP), which extends the variational formulation of EP to time-varying inputs. We demonstrate that GLEP yields different learning algorithms depending on the boundary conditions of the system, many of which are impractical for implementation. We then show that Hamiltonian Echo Learning (HEL) -- which includes the recently proposed Recurrent HEL (RHEL) and the earlier known Hamiltonian Echo Backpropagation (HEB) algorithms -- can be derived as a special case of GLEP. Notably, HEL is the only instance of GLEP we found that inherits the properties that make EP a desirable alternative to backpropagation for hardware implementations: it operates in a "forward-only" manner (i.e. using the same system for both inference and learning), it scales efficiently (requiring only two or more passes through the system regardless of model size), and enables local learning.
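To make the classic (static-input) EP recipe the abstract builds on concrete, here is a minimal sketch on a one-weight toy energy. This is an illustration of standard two-phase EP, not of the paper's GLEP or HEL algorithms; the energy, cost, and step sizes are illustrative choices, not taken from the paper:

```python
# Toy Equilibrium Propagation (static EP, hypothetical example):
# scalar input x, scalar state s, scalar weight w.
#   Energy E(s) = 0.5*s^2 - w*x*s   (free-phase fixed point: s* = w*x)
#   Cost   C(s) = 0.5*(s - y)^2

def relax(w, x, y, beta, steps=2000, lr=0.05):
    """Settle s by gradient descent on the total energy E + beta*C."""
    s = 0.0
    for _ in range(steps):
        dF = (s - w * x) + beta * (s - y)   # dE/ds + beta*dC/ds
        s -= lr * dF
    return s

w, x, y, beta = 0.7, 1.3, 2.0, 1e-3

s_free = relax(w, x, y, beta=0.0)    # free phase (inference)
s_nudge = relax(w, x, y, beta=beta)  # weakly nudged phase (learning)

# EP gradient estimate: (1/beta) * (dE/dw|nudged - dE/dw|free),
# where dE/dw = -x*s; both phases reuse the same dynamics ("forward-only").
ep_grad = (-(x * s_nudge) - (-(x * s_free))) / beta

# Analytic gradient of C w.r.t. w at the free fixed point, for comparison.
true_grad = (w * x - y) * x
# The finite-beta EP estimate is close to true_grad (bias vanishes as beta -> 0).
```

The point of the toy: both phases run the same relaxation dynamics, and the weight update only needs locally available quantities (`x`, `s`) contrasted between the two settled states, which is what makes EP attractive for hardware.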
Problem

Research questions and friction points this paper is trying to address.

How can EP's variational description be extended from fixed points to full trajectories under time-varying inputs?
Which boundary conditions yield learning algorithms that are practical to implement?
How does Hamiltonian Echo Learning relate to a generalized EP framework?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends EP to time-varying inputs via GLEP
Derives HEL as a special case of GLEP
Enables forward-only, efficient, local learning