🤖 AI Summary
This study investigates whether multilingual large language models (MLLMs) exhibit an implicit English bias in their internal decision-making. Using logit lens analysis, cross-lingual activation visualization, and English-based activation steering, we find that key semantic representations for French, German, Dutch, and Mandarin inputs and outputs consistently align with the English embedding space: English functions as an implicit internal medium of thought, a pattern the study frames as an English-centric representational architecture. Because this bias operates below the surface of the input and output languages, it is a blind spot for multilingual interpretability. Crucially, we demonstrate that steering vectors derived from English activations improve cross-lingual controllability more than vectors computed in the input language, providing intervention-based evidence for an English-dominated representational structure in MLLMs. The findings connect interpretability analysis with practical control of multilingual foundation models.
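For intuition, here is a minimal logit-lens sketch in the spirit of the analysis described above: it projects each layer's hidden state through the model's unembedding matrix and prints the top-ranked tokens per layer. The model name, the prompt, and the Llama-specific attributes (`model.model.norm`) are illustrative assumptions, not details taken from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model; the analysis targets open decoder-only LMs of this kind.
model_name = "meta-llama/Llama-2-7b-hf"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

# Illustrative translation prompt: French word, Mandarin continuation.
prompt = 'Français: "fleur" - 中文: "'
inputs = tok(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

unembed = model.get_output_embeddings().weight  # [vocab, d_model]
final_norm = model.model.norm  # Llama's final RMSNorm (assumed attribute path)

for layer, h in enumerate(out.hidden_states):  # num_layers + 1 states
    logits = final_norm(h[0, -1]) @ unembed.T  # decode an intermediate state
    top = tok.convert_ids_to_tokens(logits.topk(5).indices.tolist())
    print(f"layer {layer:2d}: {top}")
# Expectation under the paper's finding: mid-to-late layers rank English
# tokens (e.g. 'flower') highly before the final layers resolve to the
# target-language token.
```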
📝 Abstract
Large language models (LLMs) have multilingual capabilities and can solve tasks across various languages. However, we show that current LLMs make key decisions in a representation space closest to English, regardless of their input and output languages. Exploring the internal representations with a logit lens for sentences in French, German, Dutch, and Mandarin, we show that the LLM first emits representations close to English for semantically loaded words before translating them into the target language. We further show that activation steering in these LLMs is more effective when the steering vectors are computed in English rather than in the language of the inputs and outputs. This suggests that multilingual LLMs perform key reasoning steps in a representation that is heavily shaped by English in a way that is not transparent to system users.
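The steering result can be sketched concretely. Below, a difference-of-means vector is computed from contrasting English prompts and added to one layer's residual stream while the model generates in French; it reuses `model` and `tok` from the logit-lens sketch above. The layer index, prompts, scale, and the `model.model.layers` attribute path are illustrative assumptions, not the paper's exact setup.

```python
import torch

LAYER, SCALE = 15, 4.0  # assumed mid-layer and steering strength

def mean_activation(prompts: list[str], layer: int = LAYER) -> torch.Tensor:
    """Mean last-token hidden state at the output of decoder layer `layer`."""
    acts = []
    for p in prompts:
        enc = tok(p, return_tensors="pt").to(model.device)
        with torch.no_grad():
            out = model(**enc, output_hidden_states=True)
        # hidden_states[layer + 1] is the output of decoder layer `layer`.
        acts.append(out.hidden_states[layer + 1][0, -1].float())
    return torch.stack(acts).mean(0)

# Contrast pairs written in ENGLISH, even though generation is in French.
positive = ["The weather today is wonderful.", "I love this sunny day."]
negative = ["The weather today is awful.", "I hate this gloomy day."]
steering_vector = mean_activation(positive) - mean_activation(negative)

def steer_hook(module, args, output):
    # Llama decoder layers return a tuple; output[0] is the hidden state.
    hidden = output[0] + SCALE * steering_vector.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.model.layers[LAYER].register_forward_hook(steer_hook)
try:
    enc = tok("Le temps aujourd'hui est", return_tensors="pt").to(model.device)
    gen = model.generate(**enc, max_new_tokens=20, do_sample=False)
    print(tok.decode(gen[0], skip_special_tokens=True))
finally:
    handle.remove()  # always detach the hook after use
```

The paper's claim is that a vector built this way from English contrast pairs shifts generations in other languages more reliably than one built from translated pairs, which is what makes the English-centric representation empirically testable rather than merely observational.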