Superposition unifies power-law training dynamics

📅 2026-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how feature superposition influences power-law dynamics in neural network training, focusing on training speed and data dependence. Within a teacher-student framework, the authors combine analytic theory with analysis of input statistics and channel importance to show that, without superposition, training exhibits slow, data-dependent power-law decay. Introducing a superposition bottleneck induces a transition to a universal 1/t convergence rate (power-law exponent ≈ 1) in training error. This accelerated regime is robust across diverse data distributions and channel configurations, indicating that superposition can eliminate the data dependence of the training exponent and yield up to a tenfold acceleration in training.

📝 Abstract
We investigate the role of feature superposition in the emergence of power-law training dynamics using a teacher-student framework. We first derive an analytic theory for training without superposition, establishing that the power-law training exponent depends on both the input data statistics and the channel importance. Remarkably, we discover that a superposition bottleneck induces a transition to a universal power-law exponent of $\sim 1$, independent of data and channel statistics. This one-over-time ($1/t$) training with superposition represents up to a tenfold acceleration compared to the purely sequential learning that takes place in the absence of superposition. Our finding that superposition leads to rapid training with a data-independent power-law exponent may have important implications for a wide range of neural networks that employ superposition, including production-scale large language models.
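The no-superposition claim above — that the power-law training exponent is set by the input statistics and channel importances — can be illustrated with a toy calculation. The sketch below is not the paper's code; it assumes a linear teacher-student model under gradient flow, a power-law input spectrum $\lambda_i = i^{-\alpha}$, and importances $c_i = i^{-\beta}$, so each mode decays independently and the summed loss follows $t^{-p}$ with $p = (\beta - 1)/\alpha$ at late times. All parameter names and values here are illustrative.

```python
import numpy as np

# Toy sketch (not the paper's code) of the no-superposition regime:
# under gradient flow in a linear teacher-student model, mode i with
# input eigenvalue lam_i and channel importance c_i contributes
#     c_i * exp(-2 * eta * lam_i * t)
# to the loss. With lam_i = i**-alpha and c_i = i**-beta, the summed
# loss decays as t**-p with p = (beta - 1) / alpha at late times,
# i.e. an exponent set by the data and channel statistics.

def loss_curve(alpha, beta, eta=0.1, n_modes=50_000, ts=None):
    """Analytic per-mode loss of gradient-flow linear regression."""
    if ts is None:
        ts = np.logspace(2, 4, 30)          # late-time window
    i = np.arange(1, n_modes + 1, dtype=float)
    lam = i ** -alpha                       # input covariance spectrum
    c = i ** -beta                          # channel importances
    loss = np.array([(c * np.exp(-2.0 * eta * lam * t)).sum() for t in ts])
    return ts, loss

def fit_exponent(ts, loss):
    """Estimate p in loss ~ t**-p by least squares in log-log space."""
    return -np.polyfit(np.log(ts), np.log(loss), 1)[0]

ts, loss = loss_curve(alpha=1.0, beta=2.0)
p1 = fit_exponent(ts, loss)   # (beta-1)/alpha = 1, so p1 is close to 1

ts, loss = loss_curve(alpha=2.0, beta=2.0)
p2 = fit_exponent(ts, loss)   # (beta-1)/alpha = 0.5, so p2 is close to 0.5
```

Per the abstract, adding a superposition bottleneck would collapse this data-dependent exponent to a universal value of roughly 1 regardless of $(\alpha, \beta)$; that nonlinear effect is not captured by this independent-mode sketch.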
Problem

Research questions and friction points this paper is trying to address.

superposition
power-law training dynamics
teacher-student framework
training acceleration
universal exponent
Innovation

Methods, ideas, or system contributions that make the work stand out.

feature superposition
power-law training dynamics
teacher-student framework
universal exponent
training acceleration