Towards Scaling Deep Neural Networks with Predictive Coding: Theory and Practice

📅 2025-10-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Predictive coding (PC), a brain-inspired and potentially more energy-efficient alternative to backpropagation, has long suffered from training instability in deep networks, and its inference and learning dynamics remain poorly understood. This work puts PC on a theoretical footing by showing that its learning dynamics can be interpreted as an approximate trust-region method: despite relying only on first-order local updates, PC implicitly exploits higher-order information, which confers robustness against vanishing gradients. Building on an analysis of PC's inference dynamics, the work proposes μPC, a new parameterization that for the first time enables stable training of PC networks exceeding 100 layers with little tuning and competitive performance on simple tasks. By grounding PC in optimization theory, it clarifies the algorithm's intrinsic learning advantages, substantially improves its scalability, and provides critical support for low-power, biologically plausible deep learning paradigms.

📝 Abstract
Backpropagation (BP) is the standard algorithm for training the deep neural networks that power modern artificial intelligence including large language models. However, BP is energy inefficient and unlikely to be implemented by the brain. This thesis studies an alternative, potentially more efficient brain-inspired algorithm called predictive coding (PC). Unlike BP, PC networks (PCNs) perform inference by iterative equilibration of neuron activities before learning or weight updates. Recent work has suggested that this iterative inference procedure provides a range of benefits over BP, such as faster training. However, these advantages have not been consistently observed, the inference and learning dynamics of PCNs are still poorly understood, and deep PCNs remain practically untrainable. Here, we make significant progress towards scaling PCNs by taking a theoretical approach grounded in optimisation theory. First, we show that the learning dynamics of PC can be understood as an approximate trust-region method using second-order information, despite explicitly using only first-order local updates. Second, going beyond this approximation, we show that PC can in principle make use of arbitrarily higher-order information, such that for feedforward networks the effective landscape on which PC learns is far more benign and robust to vanishing gradients than the (mean squared error) loss landscape. Third, motivated by a study of the inference dynamics of PCNs, we propose a new parameterisation called "μPC", which for the first time allows stable training of 100+ layer networks with little tuning and competitive performance on simple tasks. Overall, this thesis significantly advances our fundamental understanding of the inference and learning dynamics of PCNs, while highlighting the need for future research to focus on hardware co-design if PC is to compete with BP at scale.
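The inference-then-learning procedure the abstract describes can be sketched in a few lines. This is a minimal toy implementation of a standard predictive coding network (energy = sum of squared layer-wise prediction errors; inference = gradient descent on hidden activities; learning = local Hebbian-like weight updates), not code from the thesis — layer sizes, learning rates, and iteration counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
tanh = np.tanh
dtanh = lambda z: 1.0 - np.tanh(z) ** 2  # derivative of tanh

# Toy PCN with two hidden layers; sizes are illustrative, not from the thesis.
sizes = [4, 8, 8, 2]
W = [rng.normal(0.0, 1.0 / np.sqrt(m), size=(n, m))
     for m, n in zip(sizes[:-1], sizes[1:])]

def pc_step(x_in, y, T=50, lr_x=0.1, lr_w=0.01):
    """One PC training step: equilibrate activities, then update weights locally."""
    # Initialise activities with a feedforward pass, then clamp input and output.
    x = [x_in]
    for Wl in W:
        x.append(Wl @ tanh(x[-1]))
    x[-1] = y.copy()

    def errors():
        # eps[l] = x[l+1] - prediction of layer l+1 from layer l
        return [x[l + 1] - W[l] @ tanh(x[l]) for l in range(len(W))]

    # Inference: gradient descent on the energy w.r.t. hidden activities only.
    for _ in range(T):
        eps = errors()
        for l in range(1, len(x) - 1):
            grad = eps[l - 1] - dtanh(x[l]) * (W[l].T @ eps[l])
            x[l] -= lr_x * grad

    # Learning: each weight update uses only locally available quantities.
    eps = errors()
    for l in range(len(W)):
        W[l] += lr_w * np.outer(eps[l], tanh(x[l]))
    return 0.5 * sum(float(e @ e) for e in eps)  # energy after equilibration

x_in = rng.normal(size=4)
y = np.array([1.0, -1.0])
energies = [pc_step(x_in, y) for _ in range(20)]
```

Note the contrast with BP: no global backward pass is run; each weight update depends only on the pre- and post-synaptic quantities of its own layer, which is what makes the scheme a candidate for efficient, biologically plausible hardware.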
Problem

Research questions and friction points this paper is trying to address.

Scaling deep neural networks using brain-inspired predictive coding algorithms
Understanding inference and learning dynamics of predictive coding networks
Enabling stable training of deep predictive coding networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Predictive coding uses iterative neuron equilibration for inference
PC learning approximates trust-region methods with second-order information
μPC parameterization enables stable training of 100+ layer networks