Scaling Test-time Compute for Low-resource Languages: Multilingual Reasoning in LLMs

📅 2025-04-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited deep-reasoning capability of large language models (LLMs) in low-resource languages. We propose English-Pivoted CoT Training: models take input in a low-resource language, generate the chain-of-thought (CoT) reasoning trace in English, and produce the final answer in the target language. The method explicitly uncovers and exploits an inherent language-distribution bias in LLMs' latent space, leveraging English as a cross-lingual reasoning medium to sidestep the bottleneck of end-to-end CoT training for low-resource languages. Through supervised fine-tuning on low-resource-language inputs paired with mixed-language outputs, test-time compute scaling, and cross-lingual latent-space analysis, the approach significantly outperforms monolingual CoT baselines across multiple low-resource-language tasks, with gains of up to 28.33%. Crucially, the analysis establishes a quantitative relationship between reasoning depth and the strength of language bias in the model's latent space.
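As a concrete illustration, the training examples described above can be sketched as prompt/completion pairs whose completion holds an English reasoning trace followed by a target-language answer. This is a minimal sketch; the `<think>`/`</think>` delimiters, field names, and the Irish sample strings are assumptions for illustration, not the paper's exact data format.

```python
def build_english_pivoted_example(question_lrl: str,
                                  cot_english: str,
                                  answer_lrl: str) -> dict:
    """Pair a low-resource-language (LRL) question with a target string
    whose reasoning trace is in English and whose final answer is in
    the LRL, as in English-Pivoted CoT Training."""
    completion = f"<think>\n{cot_english}\n</think>\n{answer_lrl}"
    return {"prompt": question_lrl, "completion": completion}

# Hypothetical Irish-language example
example = build_english_pivoted_example(
    question_lrl="Cé mhéad é 2 + 3?",     # input in the LRL
    cot_english="2 plus 3 equals 5.",      # CoT pivoted to English
    answer_lrl="Is é 5 an freagra.",       # final answer back in the LRL
)
print(example["completion"])
```

Fine-tuning on pairs shaped like this teaches the model to reason in its dominant language while keeping the user-facing answer in the target language.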

📝 Abstract
Recent advances in test-time compute scaling have enabled Large Language Models (LLMs) to tackle deep reasoning tasks by generating a chain-of-thought (CoT) that includes trial and error, backtracking, and intermediate reasoning steps before producing the final answer. However, these techniques have been applied predominantly to popular languages, such as English, leaving reasoning in low-resource languages underexplored and misaligned. In this work, we investigate the multilingual mechanism by which LLMs internally operate in a latent space biased toward their inherently dominant language. To leverage this phenomenon for low-resource languages, we train models to generate the CoT in English while outputting the final response in the target language, given input in the low-resource language. Our experiments demonstrate that this approach, named English-Pivoted CoT Training, outperforms other baselines, including training to generate both the CoT and the final response solely in the target language, with up to 28.33% improvement. Further analysis provides novel insights into the relationships between reasoning and multilinguality of LLMs, prompting better approaches for developing multilingual large reasoning models.
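One common way to scale test-time compute, as the abstract alludes to, is to sample several CoT completions and aggregate their extracted final answers by majority vote (self-consistency). The sketch below assumes the answers have already been extracted from sampled completions; the sample list is illustrative, and the paper's exact scaling procedure may differ.

```python
from collections import Counter

def majority_vote(final_answers):
    """Aggregate sampled CoT completions by taking the most frequent
    extracted final answer (self-consistency decoding)."""
    return Counter(final_answers).most_common(1)[0][0]

# Final answers extracted from 8 hypothetical sampled completions
samples = ["5", "5", "6", "5", "5", "7", "5", "6"]
print(majority_vote(samples))  # → 5
```

Spending more compute here means sampling more completions, which makes the vote more robust to occasional reasoning errors in any single trace.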
Problem

Research questions and friction points this paper is trying to address.

Enhancing reasoning in low-resource languages via LLMs
Addressing bias toward dominant languages in multilingual reasoning
Generating CoT in English while producing the final output in the target language
Innovation

Methods, ideas, or system contributions that make the work stand out.

English-Pivoted CoT Training for low-resource languages
Generates CoT in English, final response in target language
Improves reasoning performance by up to 28.33%