🤖 AI Summary
This study investigates whether large language models (LLMs) possess human-like innate linguistic preferences: specifically, the capacity to distinguish learnable (natural) from unlearnable (unnatural) languages. Method: We systematically generate unnatural languages via perturbation functions that disrupt the syntactic and statistical properties characteristic of human languages; we then compare GPT-2's learning trajectories on natural versus unnatural languages using perplexity dynamics and cross-linguistic variation analysis. Contribution/Results: GPT-2 exhibits no statistically significant difference in learning difficulty between natural and unnatural languages, failing to replicate the typological sensitivity observed in human language acquisition. This is the first controlled empirical demonstration that current LLMs lack the innate constraint mechanisms hypothesized to underlie human linguistic universals. The findings challenge the prevailing assumption that LLMs implicitly encode human-like linguistic biases, providing critical evidence for a fundamental divergence between LLMs and human language cognition.
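To make the methodology concrete, here is a minimal sketch of the kind of pipeline the summary describes: deterministic perturbation functions that turn a natural corpus into an "impossible" counterpart, plus a simple separation metric over paired perplexity curves. The function names, the specific perturbations (sentence reversal, windowed shuffling), and the mean-gap metric are illustrative assumptions, not the paper's actual implementations.

```python
import random

def reverse_perturbation(tokens):
    # Hypothetical "impossible" counterpart: deterministically reverse
    # each sentence's token order, breaking natural word-order patterns.
    return list(reversed(tokens))

def local_shuffle_perturbation(tokens, window=3, seed=0):
    # Hypothetical perturbation: shuffle tokens within fixed-size windows,
    # disrupting local syntax while preserving unigram statistics.
    rng = random.Random(seed)  # fixed seed keeps the mapping deterministic
    out = []
    for i in range(0, len(tokens), window):
        chunk = tokens[i:i + window]
        rng.shuffle(chunk)
        out.extend(chunk)
    return out

def mean_perplexity_gap(ppl_natural, ppl_impossible):
    # Toy separation metric: mean gap between two perplexity curves
    # sampled at the same training checkpoints. A value near zero means
    # the model learned both languages about equally easily.
    assert len(ppl_natural) == len(ppl_impossible)
    return sum(b - a for a, b in zip(ppl_natural, ppl_impossible)) / len(ppl_natural)

sentence = "the cat sat on the mat".split()
impossible = reverse_perturbation(sentence)
```

In the study's framing, a model with human-like biases should show a consistently positive gap (impossible languages harder to learn); the reported result is that no such systematic gap appears.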
📝 Abstract
Are large language models (LLMs) sensitive to the distinction between humanly possible languages and humanly impossible languages? This question is taken by many to bear on whether LLMs and humans share the same innate learning biases. Previous work has attempted to answer it in the affirmative by comparing LLM learning curves on existing language datasets and on "impossible" datasets derived from them via various perturbation functions. Using the same methodology, we examine this claim on a wider set of languages and impossible perturbations. We find that in most cases, GPT-2 learns each language and its impossible counterpart equally easily, in contrast to previous claims. We also apply a more lenient criterion by testing whether GPT-2 provides any kind of separation between the whole set of natural languages and the whole set of impossible languages. By considering cross-linguistic variance in various metrics computed on the perplexity curves, we show that GPT-2 provides no systematic separation between the possible and the impossible. Taken together, these perspectives show that LLMs do not share the innate human biases that shape linguistic typology.