Scale leads to compositional generalization

📅 2025-07-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Can neural networks achieve systematic generalization over discrete compositional structures using continuous, distributed representations? Method: We investigate how model scale and data coverage influence this capability, using a controlled training paradigm that varies coverage of the task space, standard multilayer perceptrons (MLPs), and linear decoding analyses of hidden activations. Contribution/Results: We find that jointly scaling model size and dataset size substantially improves compositional generalization. We also prove that standard MLPs can approximate a general class of compositional task families to arbitrary precision using only a linear number of neurons in the number of task modules. Both theoretical analysis and empirical validation indicate that sufficient coverage of the task distribution is a necessary condition for reliable compositional generalization. Moreover, when networks generalize compositionally, the constituents of a task become linearly decodable from hidden activations; this metric predicts compositional failures in large language–vision models, particularly text-to-image generation, where incomplete task-space coverage impedes systematic composition.

📝 Abstract
Can neural networks systematically capture discrete, compositional task structure despite their continuous, distributed nature? The impressive capabilities of large-scale neural networks suggest that the answer to this question is yes. However, even for the most capable models, there are still frequent failure cases that raise doubts about their compositionality. Here, we seek to understand what it takes for a standard neural network to generalize over tasks that share compositional structure. We find that simply scaling data and model size leads to compositional generalization. We show that this holds across different task encodings as long as the training distribution sufficiently covers the task space. In line with this finding, we prove that standard multilayer perceptrons can approximate a general class of compositional task families to arbitrary precision using only a linear number of neurons with respect to the number of task modules. Finally, we uncover that if networks successfully compositionally generalize, the constituents of a task can be linearly decoded from their hidden activations. We show that this metric correlates with failures of text-to-image generation models to compose known concepts.
Problem

Research questions and friction points this paper is trying to address.

Can neural networks generalize over compositional task structures?
Does scaling data and model size enable compositional generalization?
Can task constituents be decoded from hidden activations?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scaling data and model size yields compositional generalization
MLPs approximate compositional task families with a linear number of neurons
Task constituents are linearly decodable from hidden activations
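The linear-decodability finding can be illustrated with a small sketch. The setup below is hypothetical (synthetic activations, not the paper's models or data): we simulate hidden states in which each task's active constituent is linearly embedded plus noise, then fit a linear probe, as in the paper's linear decoding analysis. High held-out probe accuracy is the signature of linearly separable compositional structure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical setup: each task uses one of 4 constituent modules.
# We simulate hidden activations where the active constituent is
# linearly embedded (one direction per constituent) plus noise.
n_samples, hidden_dim, n_constituents = 1000, 64, 4
directions = rng.normal(size=(n_constituents, hidden_dim))
labels = rng.integers(0, n_constituents, size=n_samples)
activations = directions[labels] + 0.5 * rng.normal(size=(n_samples, hidden_dim))

# Linear probe: high held-out accuracy indicates the constituent
# is linearly decodable from the hidden layer.
probe = LogisticRegression(max_iter=1000)
probe.fit(activations[:800], labels[:800])
accuracy = probe.score(activations[800:], labels[800:])
print(f"probe accuracy: {accuracy:.2f}")
```

In the paper, the same kind of probe accuracy is used as a metric that correlates with whether text-to-image models succeed at composing known concepts.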
Florian Redhardt
ETH Zurich
Yassir Akram
ETH Zurich
Simon Schug
Princeton University
meta-learning, compositional generalization