🤖 AI Summary
This paper addresses the fragmentation between sentence-level and document-level modeling in document-level machine translation, which leads to inconsistencies in coherence and terminology because intermediate translations vary in quality. To tackle this, the authors propose a Dual Intermediate Translation Collaborative Refinement framework, the first to jointly leverage sentence-to-sentence (Sent2Sent) and document-to-document (Doc2Doc) intermediate translations to guide large language model (LLM) fine-tuning. They further introduce a quality-aware adaptive loss weighting mechanism that dynamically emphasizes hard-to-translate samples. Supervised fine-tuning is conducted on LLaMA-3-8B-Instruct and Mistral-Nemo-Instruct, augmented by a dedicated translation quality assessment module for fine-grained optimization. Evaluated across ten cross-lingual document translation tasks, the method significantly outperforms single-level refinement baselines, achieving substantial gains in document coherence and terminology consistency, which demonstrates both the efficacy and generalizability of collaborative refinement.
📝 Abstract
Recent research has shown that large language models (LLMs) can enhance translation quality through self-refinement. In this paper, we build on this idea by extending the refinement from sentence-level to document-level translation, specifically focusing on document-to-document (Doc2Doc) translation refinement. Since sentence-to-sentence (Sent2Sent) and Doc2Doc translation address different aspects of the translation process, we propose fine-tuning LLMs for translation refinement using two intermediate translations, combining the strengths of both Sent2Sent and Doc2Doc. Additionally, recognizing that the quality of intermediate translations varies, we introduce an enhanced fine-tuning method with quality awareness that assigns lower weights to easier translations and higher weights to more difficult ones, enabling the model to focus on challenging translation cases. Experimental results across ten translation tasks with LLaMA-3-8B-Instruct and Mistral-Nemo-Instruct demonstrate the effectiveness of our approach.
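The quality-aware weighting described above (lower weights for easier translations, higher weights for harder ones) can be sketched as follows. The paper does not specify the exact weighting function or quality estimator here, so the exponential form, the `alpha` sharpness parameter, and the normalization scheme are illustrative assumptions, not the authors' implementation.

```python
import math

def quality_aware_weights(quality_scores, alpha=1.0):
    """Map per-sample quality scores in [0, 1] to loss weights.

    Lower-quality (harder) intermediate translations receive larger
    weights. `alpha` controls how sharply the weighting focuses on hard
    samples (an assumed hyperparameter, not from the paper). Weights are
    normalized to sum to the batch size so the overall loss scale is
    preserved.
    """
    # Exponential emphasis on low-quality samples (illustrative choice).
    raw = [math.exp(alpha * (1.0 - q)) for q in quality_scores]
    scale = len(raw) / sum(raw)
    return [w * scale for w in raw]

def weighted_loss(per_sample_losses, quality_scores, alpha=1.0):
    """Quality-weighted mean of per-sample fine-tuning losses."""
    weights = quality_aware_weights(quality_scores, alpha)
    return sum(w * l for w, l in zip(weights, per_sample_losses)) / len(weights)

# A high-quality (easy) and a low-quality (hard) intermediate translation:
weights = quality_aware_weights([0.9, 0.3])
print(weights[1] > weights[0])  # the harder sample gets the larger weight
```

In practice the per-sample quality scores would come from the translation quality assessment module mentioned in the summary, and the per-sample losses from `reduction='none'` cross-entropy over the reference tokens.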