🤖 AI Summary
The mechanisms by which local learning dynamics in biological and artificial neural networks collectively solve global tasks remain poorly understood, and no unified, interpretable, task-agnostic framework exists for modeling local learning objectives. Method: Grounded in information-theoretic first principles, we derive neuron-level local goal functions; introduce "infomorphic" networks as a concept that unifies diverse learning rules; and combine mutual-information and conditional-entropy modeling, differentiable architecture design, and parameterized local objectives to enable task-driven automatic discovery of learning rules. Contribution/Results: The approach achieves state-of-the-art performance on multiple benchmark tasks while substantially improving model transparency and the interpretability of individual neurons' functions, thereby establishing an explanatory bridge between theoretical neuroscience and AI on local learning mechanisms.
📝 Abstract
Significance: Which learning goals must individual computational elements pursue to contribute to a network-level task solution? Despite their impressive performance, both biological and artificial neural networks lack this local-level understanding. We address this question by characterizing the information processing motifs of individual neurons as local goal functions, derived from first principles of information theory. A simple parameterization then enables the definition of an abstract goal function that spans a broad space of different learning rules and tasks. The resulting "infomorphic" networks offer a constructive approach to understanding local learning and information processing in neural networks, creating a bridge between theoretical neuroscience and artificial intelligence.
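To make the idea of a parameterized, neuron-level goal function concrete, the sketch below shows a minimal toy version: a neuron's local objective is a weighted combination of information-theoretic terms relating its output to two input streams. This is an illustration under simplifying assumptions, not the paper's actual framework — the names (`local_goal`, `gammas`, the split into a "receptive" input `r` and "contextual" input `c`) are hypothetical, the estimator is a plug-in histogram estimate for discrete samples, and the real work decomposes the information into finer-grained terms than plain mutual information.

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits for discrete integer samples."""
    joint = np.zeros((x.max() + 1, y.max() + 1))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)  # marginal P(X)
    py = joint.sum(axis=0, keepdims=True)  # marginal P(Y)
    nz = joint > 0  # avoid log(0) on empty cells
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def local_goal(y, r, c, gammas):
    """Toy parameterized local objective: G = g_r * I(Y;R) + g_c * I(Y;C).

    Varying (g_r, g_c) selects different "information processing motifs"
    for the neuron, e.g. copying the receptive input while ignoring context.
    """
    g_r, g_c = gammas
    return g_r * mutual_information(y, r) + g_c * mutual_information(y, c)
```

For example, a neuron whose output copies `r` while being independent of `c` scores maximally under `gammas = (1.0, -1.0)`; sweeping the `gammas` parameters traces out a space of candidate local learning goals, which is the kind of search space the abstract's "simple parameterization" refers to.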