LangFIR: Discovering Sparse Language-Specific Features from Monolingual Data for Language Steering

πŸ“… 2026-04-03
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the challenge of effectively controlling multilingual output in large language models without relying on expensive parallel or multilingual data. The authors propose LangFIR, a novel method that, for the first time, identifies extremely sparse feature directions encoding language identity within sparse autoencoders (SAEs) using only minimal monolingual data and random token sequences. By introducing a random token filtering mechanism to eliminate language-agnostic components, LangFIR constructs highly efficient language steering vectors. Evaluated across three models, three datasets, and twelve languages, the approach significantly outperforms both parallel-data-dependent methods and the strongest monolingual baselines, achieving the highest average BLEU scores. These results reveal that language identity in multilingual models is encoded by highly sparse featuresβ€”a key mechanistic insight with implications for controllable generation.
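The core mechanism described above can be sketched in a few lines: score how often each SAE feature fires on target-language tokens, discard features that also fire on random-token sequences (these are language-agnostic), and sum the surviving features' decoder directions into a steering vector. This is a minimal toy sketch, not the paper's implementation; the SAE weights, thresholds, and function names here are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy SAE weights (real SAEs are trained on residual activations).
D_MODEL, N_FEATURES = 64, 512
W_enc = rng.standard_normal((D_MODEL, N_FEATURES)) * 0.1
W_dec = rng.standard_normal((N_FEATURES, D_MODEL)) * 0.1

def sae_features(acts):
    """ReLU SAE encoding: (tokens, d_model) activations -> sparse feature activations."""
    return np.maximum(acts @ W_enc, 0.0)

def activation_rate(token_acts, threshold=0.0):
    """Fraction of tokens on which each feature fires."""
    return (sae_features(token_acts) > threshold).mean(axis=0)

def langfir_features(target_acts, random_acts, act_thresh=0.5, rand_thresh=0.1):
    """Keep features that fire often on target-language tokens but rarely on
    random-token sequences, filtering out language-agnostic features.
    Thresholds here are illustrative, not values from the paper."""
    keep = (activation_rate(target_acts) > act_thresh) \
         & (activation_rate(random_acts) < rand_thresh)
    return np.where(keep)[0]

def steering_vector(feature_ids):
    """Sum the decoder directions of the selected features and normalize;
    the result is added to residual activations at inference time."""
    v = W_dec[feature_ids].sum(axis=0)
    return v / np.linalg.norm(v)
```

The key design point is that the filter needs no parallel or multilingual data: random-token sequences alone expose the features that respond to generic input statistics rather than language identity.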
πŸ“ Abstract
Large language models (LLMs) show strong multilingual capabilities, yet reliably controlling the language of their outputs remains difficult. Representation-level steering addresses this by adding language-specific vectors to model activations at inference time, but identifying language-specific directions in the residual stream often relies on multilingual or parallel data that can be expensive to obtain. Sparse autoencoders (SAEs) decompose residual activations into interpretable, sparse feature directions and offer a natural basis for this search, yet existing SAE-based approaches face the same data constraint. We introduce LangFIR (Language Feature Identification via Random-token Filtering), a method that discovers language-specific SAE features using only a small amount of monolingual data and random-token sequences. Many SAE features consistently activated by target-language inputs do not encode language identity. Random-token sequences surface these language-agnostic features, allowing LangFIR to filter them out and isolate a sparse set of language-specific features. We show that these features are extremely sparse, highly selective for their target language, and causally important: directional ablation increases cross-entropy loss only for the corresponding language. Using these features to construct steering vectors for multilingual generation control, LangFIR achieves the best average BLEU across three models (Gemma 3 1B, Gemma 3 4B, and Llama 3.1 8B), three datasets, and twelve target languages, outperforming the strongest monolingual baseline and surpassing methods that rely on parallel data. Our results suggest that language identity in multilingual LLMs is localized in a sparse set of feature directions discoverable with monolingual data. Code is available at https://anonymous.4open.science/r/LangFIR-C0F5/.
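The abstract's causal test, directional ablation, projects the language-specific direction out of every residual activation and checks that cross-entropy loss rises only for the corresponding language. The projection itself is a one-liner; the sketch below shows it under the assumption that activations are stored as a `(tokens, d_model)` array (function name and shapes are illustrative, not from the paper).

```python
import numpy as np

def directional_ablation(acts, v):
    """Remove the component of each activation along direction v.

    acts: (tokens, d_model) residual activations
    v:    (d_model,) candidate language-specific direction

    Returns activations with zero projection onto v. If v truly encodes
    language identity, feeding these ablated activations forward should
    increase cross-entropy loss only on text in that language.
    """
    v_hat = v / np.linalg.norm(v)
    # Subtract (acts . v_hat) v_hat from each row.
    return acts - np.outer(acts @ v_hat, v_hat)
```

After ablation, every activation is orthogonal to the candidate direction, so any loss increase that is specific to the target language is evidence the direction carries language identity rather than generic information.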
Problem

Research questions and friction points this paper is trying to address.

language steering
sparse features
monolingual data
language-specific directions
multilingual LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

LangFIR
sparse autoencoders
language steering
monolingual data
feature disentanglement
Sing Hieng Wong
Department of Computer Science, University of Kentucky, Kentucky, USA
Hassan Sajjad
Faculty of Computer Science, Dalhousie University
Deep Learning · NLP · Interpretability · Explainable AI · Unsupervised and semi-supervised methods
A. B. Siddique
Department of Computer Science, University of Kentucky, Kentucky, USA