🤖 AI Summary
Can neural networks achieve systematic generalization over discrete compositional structures using continuous, distributed representations?
Method: We investigate how model scale and architecture influence this capability, employing a controlled training paradigm in which coverage of the task space can be systematically varied, combined with standard multilayer perceptrons (MLPs) and linear decoding analysis.
Contribution/Results: We find that jointly scaling model size and dataset size substantially improves compositional generalization, provided the training distribution sufficiently covers the task space. We prove that standard MLPs can approximate a general class of compositional task families to arbitrary precision using a number of neurons that grows only linearly with the number of task modules. We further show that when networks generalize compositionally, the constituents of a task become linearly decodable from hidden activations, and that this decodability metric predicts compositional failures in large vision–language models—particularly in text-to-image generation—where insufficient task-space coverage impedes systematic composition.
📝 Abstract
Can neural networks systematically capture discrete, compositional task structure despite their continuous, distributed nature? The impressive capabilities of large-scale neural networks suggest that the answer to this question is yes. However, even for the most capable models, there are still frequent failure cases that raise doubts about their compositionality. Here, we seek to understand what it takes for a standard neural network to generalize over tasks that share compositional structure. We find that simply scaling data and model size leads to compositional generalization. We show that this holds across different task encodings as long as the training distribution sufficiently covers the task space. In line with this finding, we prove that standard multilayer perceptrons can approximate a general class of compositional task families to arbitrary precision using only a linear number of neurons with respect to the number of task modules. Finally, we uncover that if networks successfully compositionally generalize, the constituents of a task can be linearly decoded from their hidden activations. We show that this metric correlates with failures of text-to-image generation models to compose known concepts.
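The linear-decodability finding above can be illustrated with a minimal probe sketch. The code below is an assumption-laden toy, not the paper's experimental setup: it fabricates hidden activations in which two task constituents are linearly embedded (plus noise), then fits a closed-form least-squares linear probe for each constituent, mirroring the kind of linear decoding analysis the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each task is a pair of constituents (a, b) with
# a, b in {0..3}. We simulate hidden activations where the constituents
# are linearly embedded plus noise, mimicking a network that has
# compositionally generalized.
n_samples, hidden_dim = 2000, 64
a = rng.integers(0, 4, n_samples)
b = rng.integers(0, 4, n_samples)
W_a = rng.normal(size=(4, hidden_dim))  # per-constituent embedding directions
W_b = rng.normal(size=(4, hidden_dim))
H = W_a[a] + W_b[b] + 0.1 * rng.normal(size=(n_samples, hidden_dim))

def linear_probe_accuracy(H, labels, n_classes, train_frac=0.8):
    """Fit a one-vs-all least-squares linear probe; return held-out accuracy."""
    n_train = int(train_frac * len(labels))
    Y = np.eye(n_classes)[labels[:n_train]]  # one-hot targets, train split
    W, *_ = np.linalg.lstsq(H[:n_train], Y, rcond=None)
    preds = (H[n_train:] @ W).argmax(axis=1)
    return (preds == labels[n_train:]).mean()

acc_a = linear_probe_accuracy(H, a, 4)
acc_b = linear_probe_accuracy(H, b, 4)
print(f"probe accuracy: constituent A={acc_a:.2f}, constituent B={acc_b:.2f}")
```

In this idealized regime both probes recover their constituent almost perfectly; under the paper's framing, a model that fails to compose concepts would instead yield low probe accuracy on the corresponding constituents.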