🤖 AI Summary
This work investigates how neural language models acquire hierarchical syntactic structure from sequential data. Method: Using synthetic data generated by the tractable Random Hierarchy Model (RHM), an ensemble of probabilistic context-free grammars, the authors systematically compare the scaling behavior of Transformers and convolutional networks on next-token prediction, complementing theoretical analysis with empirical evaluation. Contribution/Results: The study provides theoretical and empirical evidence that convolutional networks, whose locality and weight sharing mirror the hierarchical generative process, achieve faster error decay (a larger scaling exponent) than globally attending Transformers. This demonstrates a tight coupling between architectural inductive bias and the statistical properties of hierarchical data, establishing an architecture-dependent scaling theory for representation learning and offering a new lens for analyzing model inductive biases and structural generalization.
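To make the notion of a scaling exponent concrete, here is a minimal, self-contained sketch (with synthetic numbers, not the paper's measurements) of how such an exponent is typically estimated: assuming test error follows a power law, error ≈ C · P^(−β) in the training-set size P, a linear fit in log-log space recovers β as minus the slope.

```python
import numpy as np

# Synthetic errors following an exact power law with beta = 0.5
# (illustrative only -- not results from the paper).
P = np.array([1e3, 1e4, 1e5, 1e6])   # training set sizes
err = 2.0 * P ** -0.5                # test errors, err = C * P**(-beta)

# Fit a line in log-log space; the scaling exponent is minus the slope.
slope, intercept = np.polyfit(np.log(P), np.log(err), 1)
beta = -slope
print(round(beta, 2))
```

A larger β means the error falls faster as more training data is added, which is the sense in which one architecture "scales better" than another.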
📝 Abstract
How do neural language models acquire a language's structure when trained for next-token prediction? We address this question by deriving theoretical scaling laws for neural network performance on synthetic datasets generated by the Random Hierarchy Model (RHM) -- an ensemble of probabilistic context-free grammars designed to capture the hierarchical structure of natural language while remaining analytically tractable. Previously, we developed a theory of representation learning based on data correlations that explains how deep learning models capture the hierarchical structure of the data sequentially, one layer at a time. Here, we extend our theoretical framework to account for architectural differences. In particular, we predict and empirically validate that convolutional networks, whose structure aligns with that of the generative process through locality and weight sharing, enjoy a faster scaling of performance compared to transformer models, which rely on global self-attention mechanisms. This finding clarifies the architectural biases underlying neural scaling laws and highlights how representation learning is shaped by the interaction between model architecture and the statistical properties of data.
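To illustrate the kind of generative process the abstract describes, the following is a hedged sketch of a Random Hierarchy Model sampler. The parameters and structure here are assumptions for illustration (the paper's exact construction may differ): a depth-`L` hierarchy in which each of `v` symbols at every level has `m` fixed production rules, each expanding into `s` lower-level symbols, with a rule chosen uniformly at random at generation time.

```python
import random

def build_rules(v, m, s, depth, rng):
    """For each level, fix m random productions of length s per symbol.

    Hypothetical parametrization: symbols are integers 0..v-1 and the
    same vocabulary size v is reused at every level of the hierarchy.
    """
    rules = []
    for _ in range(depth):
        level = {
            sym: [tuple(rng.randrange(v) for _ in range(s)) for _ in range(m)]
            for sym in range(v)
        }
        rules.append(level)
    return rules

def sample(rules, root, rng):
    """Expand the root symbol level by level into a leaf sequence."""
    seq = [root]
    for level in rules:
        # Replace every symbol by one of its m productions, chosen uniformly.
        seq = [child for sym in seq for child in rng.choice(level[sym])]
    return seq

rng = random.Random(0)
rules = build_rules(v=8, m=2, s=2, depth=3, rng=rng)
leaves = sample(rules, root=0, rng=rng)
# A depth-3 expansion with branching factor s=2 yields s**depth = 8 leaves.
print(len(leaves))
```

Training data for next-token prediction would then consist of such leaf sequences; the hierarchy is analytically tractable because the rules are fixed and the randomness enters only through rule choices.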