How Do Language Models Acquire Character-Level Information?

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how language models implicitly acquire character-level knowledge without explicit access to such information during training. Through a series of controlled experiments—including tokenizer substitution, manipulation of pretraining data, and analysis of internal model representations—the work systematically disentangles the sources of character-level knowledge into two categories: those tied to tokenization (e.g., merge rules and orthographic regularities) and those independent of tokenization (e.g., semantic associations among substrings and syntactic cues). This research provides the first clear delineation of the respective contributions of these factors to character-level knowledge acquisition, thereby revealing the underlying mechanisms by which language models implicitly encode character-level information.
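The summary's mention of "analysis of internal model representations" refers to the standard probing approach: training a simple classifier on token embeddings to predict character-level properties. A minimal sketch of the probe-target construction is below; the toy vocabulary and helper name are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of a character-presence probing target: for each token in a
# toy vocabulary, label whether a given character appears in its
# spelling. A linear probe trained on token embeddings to predict
# these labels measures how much character-level information the
# embeddings implicitly encode.
def char_presence_labels(vocab, char):
    """Return {token: 1 if char occurs in the token's spelling, else 0}."""
    return {tok: int(char in tok) for tok in vocab}

toy_vocab = ["the", "cat", "##ing", "run", "##ed"]  # hypothetical subword vocabulary
labels = char_presence_labels(toy_vocab, "t")
# labels["the"] == 1, labels["run"] == 0
```

Probe accuracy well above chance on held-out tokens is then taken as evidence that the embeddings encode character identity.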

📝 Abstract
Language models (LMs) have been reported to implicitly encode character-level information, despite such information not being explicitly provided during training. However, the mechanisms underlying this phenomenon remain largely unexplored. To reveal these mechanisms, we analyze how models acquire character-level knowledge by comparing LMs trained under controlled settings, such as specifying the pre-training dataset or tokenizer, with those trained under standard settings. We categorize the contributing factors into those tied to tokenization and those independent of tokenization. Our analysis reveals that merge rules and orthographic constraints constitute primary factors arising from tokenization, whereas semantic associations of substrings and syntactic information function as key factors independent of tokenization.
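The abstract identifies BPE merge rules as a primary tokenization-derived factor: each merged token is defined in terms of two smaller pieces, so the merge table ties tokens back to their character composition. A toy illustration (the merge table below is assumed, not taken from the paper):

```python
# Toy BPE merge table: each merged token maps to the two pieces it was
# built from. Recursively undoing merges recovers a token's character
# sequence, which is one route by which tokenization exposes
# character-level structure to the model.
merges = {"th": ("t", "h"), "the": ("th", "e")}  # hypothetical merges

def spell_out(token):
    """Recursively expand a token into its characters via the merge table."""
    if token not in merges:
        return [token]  # base case: an unmerged symbol (a character)
    left, right = merges[token]
    return spell_out(left) + spell_out(right)

chars = spell_out("the")
# chars == ["t", "h", "e"]
```

In this view, a model exposed to merge structure during tokenization has an implicit signal linking the token `the` to the characters it comprises.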
Problem

Research questions and friction points this paper is trying to address.

language models
character-level information
tokenization
mechanisms
implicit encoding
Innovation

Methods, ideas, or system contributions that make the work stand out.

character-level information
tokenization
merge rules
orthographic constraints
semantic associations
Soma Sato
Graduate School of Informatics, Nagoya University
Ryohei Sasano
Associate Professor at Nagoya University
Natural Language Processing