DRT-o1: Optimized Deep Reasoning Translation via Long Chain-of-Thought

📅 2024-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenge of accurately transferring metaphors and culture-laden expressions across languages in literary translation, this paper proposes a multi-agent long chain-of-thought (Long-CoT) translation framework. It integrates three specialized agents (a translator, an advisor, and an evaluator) orchestrated via chain-of-thought prompting, evaluation-driven iterative generation, and instruction fine-tuning on Qwen2.5 and Llama-3.1 backbones to jointly optimize semantic fidelity and stylistic preservation. The work brings long-chain reasoning into neural machine translation, constructs a large-scale Long-CoT MT training dataset, and introduces a quantifiable, multi-agent collaborative paradigm. Empirical evaluation on literary translation benchmarks shows substantial improvements over state-of-the-art LLMs and O1-like models (BLEU +12.6, METEOR +9.3), supporting Long-CoT's efficacy and generalizability for culturally sensitive translation.

📝 Abstract
Recently, O1-like models have emerged as representative examples illustrating the effectiveness of long chain-of-thought (CoT) in reasoning tasks such as math and coding. In this paper, we introduce DRT-o1, an attempt to bring the success of long CoT to neural machine translation (MT). Literary texts often contain similes and metaphors, and translating them into a target language is difficult in practice due to cultural differences; literal translation often fails to convey the intended meaning. Even professional human translators must give considerable thought to preserving semantics throughout the translation process. To simulate LLMs' long-thought ability in MT, we first mine sentences containing similes or metaphors from existing literary books, and then develop a multi-agent framework to translate these sentences via long thought. In this framework, a translator iteratively translates the source sentence following suggestions provided by an advisor. To ensure the effectiveness of the long thoughts, an evaluator is also employed to score the translation in each round. In this way, we collect tens of thousands of long-thought MT samples, which we use to train DRT-o1. Using Qwen2.5 and Llama-3.1 as backbones, DRT-o1 models learn the thought process during machine translation and outperform vanilla LLMs as well as existing O1-like LLMs, demonstrating their effectiveness. The project is available at https://github.com/krystalan/DRT-o1.
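The translate-advise-evaluate loop described in the abstract can be sketched as a simple control flow. This is a minimal illustration, not the paper's implementation: all three agent functions below are hypothetical stand-ins for the LLM calls the paper actually makes, and the scoring logic is invented for demonstration. The per-round log corresponds to the "long thought" data the authors collect for training.

```python
def translate(source, suggestion=None):
    # Stand-in for the translator LLM: draft, optionally refined by advice.
    draft = f"translation of: {source}"
    if suggestion:
        draft += f" [revised per: {suggestion}]"
    return draft

def advise(source, draft):
    # Stand-in for the advisor LLM: returns a revision suggestion.
    return "preserve the metaphor, not the literal wording"

def evaluate(source, draft):
    # Stand-in for the evaluator LLM: scores the draft 0-100.
    # Here we simply reward drafts that incorporated a revision.
    return 90 if "revised" in draft else 60

def long_thought_translate(source, threshold=85, max_rounds=5):
    """Iterate translate -> evaluate -> advise until the score clears
    the threshold or the round budget runs out. The returned log of
    (round, draft, score) tuples is the recorded 'long thought'."""
    thought_log = []
    suggestion = None
    for round_idx in range(1, max_rounds + 1):
        draft = translate(source, suggestion)
        score = evaluate(source, draft)
        thought_log.append((round_idx, draft, score))
        if score >= threshold:
            break
        suggestion = advise(source, draft)
    return draft, thought_log
```

With the stub scorer above, the loop terminates after a second, revised round; in the paper's setting each round instead involves real model calls, and the accumulated logs become supervised training data.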
Problem

Research questions and friction points this paper is trying to address.

Machine Translation
Literary Works
Cross-lingual Preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continuous Logical Reasoning
Metaphorical Phrase Handling
Cultural Differences Adaptation