🤖 AI Summary
This study investigates how language models implicitly acquire character-level knowledge without explicit access to such information during training. Through a series of controlled experiments—including tokenizer substitution, manipulation of pretraining data, and analysis of internal model representations—the work systematically disentangles the sources of character-level knowledge into two categories: those tied to tokenization (e.g., merge rules and orthographic regularities) and those independent of tokenization (e.g., semantic associations among substrings and syntactic cues). This research provides the first clear delineation of the respective contributions of these factors to character-level knowledge acquisition, thereby revealing the underlying mechanisms by which language models implicitly encode character-level information.
📝 Abstract
Language models (LMs) have been reported to implicitly encode character-level information, despite such information never being explicitly provided during training. However, the mechanisms underlying this phenomenon remain largely unexplored. To uncover these mechanisms, we analyze how models acquire character-level knowledge by comparing LMs trained under controlled settings, such as with a specified pretraining dataset or tokenizer, against those trained under standard settings. We categorize the contributing factors into those arising from tokenization and those independent of it. Our analysis reveals that merge rules and orthographic constraints are the primary factors arising from tokenization, whereas semantic associations among substrings and syntactic information are the key tokenization-independent factors.