AdaCoT: Rethinking Cross-Lingual Factual Reasoning through Adaptive Chain-of-Thought

📅 2025-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multilingual large language models exhibit cross-lingual performance disparities in factual reasoning due to imbalanced training data across languages. Method: This paper proposes an adaptive chain-of-thought framework centered on language-agnostic representations, which employs reinforcement learning to dynamically select the optimal "thinking language" as an intermediate medium for zero-shot cross-lingual multi-hop reasoning and routing, without additional pretraining and while preserving linguistic structures and cultural specificity. Contribution/Results: Evaluated on multiple cross-lingual factual reasoning benchmarks, the method improves average accuracy by 18.7% for low-resource languages, substantially narrowing the performance gap between high- and low-resource languages. It is the first approach to enable reward-driven, language-agnostic, and dynamically adaptable selection of cross-lingual reasoning paths.

📝 Abstract
Large language models (LLMs) have shown impressive multilingual capabilities through pretraining on diverse corpora. While these models show strong reasoning abilities, their performance varies significantly across languages due to uneven training data distribution. Existing approaches, such as machine translation, extensive multilingual pretraining, and cross-lingual tuning, face scalability challenges and often fail to capture nuanced reasoning processes across languages. In this paper, we introduce AdaCoT (Adaptive Chain-of-Thought), a framework that enhances multilingual reasoning by dynamically routing thought processes through intermediary "thinking languages" before generating target-language responses. AdaCoT leverages a language-agnostic core and incorporates an adaptive, reward-based mechanism for selecting optimal reasoning pathways without requiring additional pretraining. Our comprehensive evaluation across multiple benchmarks demonstrates substantial improvements in both factual reasoning quality and cross-lingual consistency, with particularly strong performance gains in low-resource language settings. The results suggest that adaptive reasoning paths can effectively bridge the performance gap between high- and low-resource languages while maintaining cultural and linguistic nuances.
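The abstract describes a reward-based mechanism for choosing a "thinking language" without further pretraining. The paper does not specify the policy in this listing, so the sketch below is only an illustrative assumption: it models the selection as an epsilon-greedy multi-armed bandit, where each candidate language's value is the running mean reward (e.g., answer correctness) observed when routing reasoning through it. The class name `ThinkingLanguageSelector` and the toy reward function are hypothetical, not from the paper.

```python
import random

class ThinkingLanguageSelector:
    """Epsilon-greedy bandit over candidate 'thinking languages' (illustrative sketch)."""

    def __init__(self, languages, epsilon=0.1):
        self.languages = list(languages)
        self.epsilon = epsilon
        self.counts = {lang: 0 for lang in self.languages}
        self.values = {lang: 0.0 for lang in self.languages}  # running mean reward

    def select(self):
        # Explore a random language with probability epsilon; otherwise
        # exploit the language with the highest estimated reward so far.
        if random.random() < self.epsilon:
            return random.choice(self.languages)
        return max(self.languages, key=lambda lang: self.values[lang])

    def update(self, lang, reward):
        # Incremental update of the mean reward estimate for this language.
        self.counts[lang] += 1
        self.values[lang] += (reward - self.values[lang]) / self.counts[lang]

# Toy usage: simulate an environment where routing through English ("en")
# yields correct answers more often than the other candidates.
random.seed(0)
selector = ThinkingLanguageSelector(["en", "zh", "sw"], epsilon=0.2)
for _ in range(500):
    lang = selector.select()
    correct = (lang == "en" and random.random() < 0.8) or random.random() < 0.3
    selector.update(lang, 1.0 if correct else 0.0)
best = max(selector.values, key=selector.values.get)
print(best)  # the selector converges toward the highest-reward language
```

In the actual framework the reward signal would come from downstream answer quality rather than a simulated coin flip, and the policy could be any RL method; the bandit form is just the simplest reward-driven selector.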
Problem

Research questions and friction points this paper is trying to address.

Multilingual Language Models
Performance Inconsistency
Data Sparsity
Innovation

Methods, ideas, or system contributions that make the work stand out.

AdaCoT
Cross-lingual Reasoning
Adaptive Thinking Paths
Xin Huang
Institute for Infocomm Research (I2R), A*STAR, Singapore
Tarun Kumar Vangani
Lead Research Engineer
Machine Translation · DNN Training Optimization
Zhengyuan Liu
Institute for Infocomm Research (I2R), A*STAR; IEEE Senior Member
Natural Language Processing · Artificial Intelligence · Human-Centered AI
Bowei Zou
Institute for Infocomm Research (I2R), A*STAR, Singapore
Ai Ti Aw
Institute for Infocomm Research (I2R), A*STAR, Singapore