Deep Learning without Weight Symmetry

📅 2024-05-31
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
A key biological implausibility of backpropagation is its requirement for precise weight symmetry between the forward and backward pathways. To address this, the authors propose Product Feedback Alignment (PFA), a learning algorithm that replaces the fixed random feedback matrix of classical Feedback Alignment with a product-based feedback mechanism. Both theoretically and empirically, PFA closely approximates standard backpropagation gradients while avoiding the weight symmetry constraint. The algorithm is compatible with convolutional neural networks (CNNs) and mainstream optimizers. On standard image recognition benchmarks, including ImageNet, it achieves accuracy comparable to backpropagation, substantially outperforms classical Feedback Alignment, and remains stable in deep architectures. By reconciling gradient-based learning with biologically plausible circuitry, PFA moves deep learning closer to biologically plausible implementations.

📝 Abstract
Backpropagation (BP), a foundational algorithm for training artificial neural networks, predominates in contemporary deep learning. Although highly successful, it is often considered biologically implausible. A significant limitation arises from the need for precise symmetry between connections in the backward and forward pathways to backpropagate gradient signals accurately, which is not observed in biological brains. Researchers have proposed several algorithms to alleviate this symmetry constraint, such as feedback alignment and direct feedback alignment. However, their divergence from backpropagation dynamics presents challenges, particularly in deeper networks and convolutional layers. Here we introduce the Product Feedback Alignment (PFA) algorithm. Our findings demonstrate that PFA closely approximates BP and achieves comparable performance in deep convolutional networks while avoiding explicit weight symmetry. Our results offer a novel solution to the longstanding weight symmetry problem, leading to more biologically plausible learning in deep convolutional networks compared to earlier methods.
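The weight symmetry constraint discussed in the abstract can be made concrete with classical Feedback Alignment, the predecessor that PFA builds on: in the backward pass, a fixed random matrix replaces the transposed forward weights. The following NumPy sketch contrasts the two backward signals on a toy two-layer linear network; the dimensions, learning rate, and loss are illustrative assumptions, not details from the paper, and PFA's product-based feedback is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer linear network: x -> h = W1 @ x -> y = W2 @ h.
# Dimensions are arbitrary choices for illustration.
n_in, n_hid, n_out = 4, 8, 3
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))

# Classical Feedback Alignment: a fixed random matrix B stands in for
# W2.T in the backward pass, removing the need for weight symmetry.
B = rng.normal(scale=0.1, size=(n_hid, n_out))

x = rng.normal(size=(n_in,))
target = rng.normal(size=(n_out,))

# Forward pass and squared-error gradient at the output.
h = W1 @ x
y = W2 @ h
e = y - target

# Backward signals for the hidden layer:
delta_bp = W2.T @ e  # backpropagation uses the transposed forward weights
delta_fa = B @ e     # feedback alignment uses the fixed random matrix B

# Weight updates driven by the feedback-alignment signal.
lr = 0.01
W2 -= lr * np.outer(e, h)
W1 -= lr * np.outer(delta_fa, x)
```

During training, the forward weights tend to align with the fixed feedback matrix, which is why the random backward signal still supports learning; the abstract notes that this approximation degrades in deeper and convolutional networks, the gap PFA targets.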
Problem

Research questions and friction points this paper is trying to address.

Addresses biological implausibility of backpropagation's weight symmetry in neural networks
Solves weight transport problem by aligning feedforward and feedback paths
Eliminates explicit weight symmetry while maintaining backpropagation-like performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Product Feedback Alignment eliminates weight symmetry
PFA closely approximates backpropagation performance
Algorithm enables biologically plausible deep learning
Ji-An Li
Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA 92093
M. Benna
Department of Neurobiology, University of California, San Diego, La Jolla, CA 92093