When Less Language is More: Language-Reasoning Disentanglement Makes LLMs Better Multilingual Reasoners

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit significant performance disparities in multilingual reasoning, excelling in high-resource languages while lagging markedly in low-resource ones. To address this, we propose the “language–reasoning disentanglement” hypothesis and provide the first empirical validation that language-specific and reasoning-related representations are separable within LLMs’ internal representation space. Inspired by cognitive neuroscience, we design a training-free, inference-time causal intervention: hierarchical feature ablation that suppresses language-specific representations while preserving a shared reasoning subspace. Extensive experiments across 11 typologically diverse languages and 10 open-source LLMs demonstrate that this lightweight intervention consistently improves multilingual reasoning, matching or surpassing supervised fine-tuning and reinforcement learning, while maintaining top-layer linguistic fidelity. The method is computationally efficient, interpretable, and requires no additional parameters or training.

📝 Abstract
Multilingual reasoning remains a significant challenge for large language models (LLMs), with performance disproportionately favoring high-resource languages. Drawing inspiration from cognitive neuroscience, which suggests that human reasoning functions largely independently of language processing, we hypothesize that LLMs similarly encode reasoning and language as separable components that can be disentangled to enhance multilingual reasoning. To evaluate this, we perform a causal intervention by ablating language-specific representations at inference time. Experiments on 10 open-source LLMs spanning 11 typologically diverse languages show that this language-specific ablation consistently boosts multilingual reasoning performance. Layer-wise analyses further confirm that language and reasoning representations can be effectively decoupled throughout the model, yielding improved multilingual reasoning capabilities, while preserving top-layer language features remains essential for maintaining linguistic fidelity. Compared to post-training such as supervised fine-tuning or reinforcement learning, our training-free ablation achieves comparable or superior results with minimal computational overhead. These findings shed light on the internal mechanisms underlying multilingual reasoning in LLMs and suggest a lightweight and interpretable strategy for improving cross-lingual generalization.
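The intervention described in the abstract (ablating language-specific representations at inference time while sparing the top layers) can be sketched as a linear projection on hidden states. This is a minimal illustrative sketch, not the paper's implementation: the direction estimator (`language_direction`, a mean-difference proxy), the `keep_top` parameter, and all function names are assumptions introduced here for illustration.

```python
import numpy as np

def language_direction(states_a, states_b):
    """Hypothetical estimator: a language-specific direction as the
    normalized difference of mean hidden states between two languages.
    Each input is a (tokens x d) array of hidden states."""
    d = states_a.mean(axis=0) - states_b.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(hidden, directions, layer, n_layers, keep_top=2):
    """Project the given language-specific directions out of `hidden`
    (a tokens x d array) at one layer, skipping the top `keep_top`
    layers -- the abstract reports that preserving top-layer language
    features is needed for linguistic fidelity."""
    if layer >= n_layers - keep_top:
        return hidden  # leave top layers untouched
    # Orthonormalize the directions so the projection is well-defined
    # even when the supplied directions are correlated.
    Q, _ = np.linalg.qr(np.stack(directions, axis=1))
    # Subtract the component of each hidden state lying in span(Q).
    return hidden - (hidden @ Q) @ Q.T
```

In practice such a projection would be applied inside the model's forward pass (e.g. via per-layer hooks), with directions estimated once from parallel multilingual inputs; the sketch above only shows the algebra of the causal intervention itself.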
Problem

Research questions and friction points this paper is trying to address.

Improving multilingual reasoning in LLMs by disentangling language and reasoning
Enhancing cross-lingual generalization without extensive post-training
Preserving linguistic fidelity while boosting reasoning performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentangle language and reasoning via ablation
Layer-wise decoupling boosts multilingual reasoning
Training-free method enhances cross-lingual generalization