🤖 AI Summary
To address the low data efficiency and poor robustness of multilingual large language models (LLMs) in low-resource machine translation, this paper proposes Mufu, a candidate-translation-guided post-editing paradigm. It reformulates translation as an integrated task comprising multilingual candidate evaluation, semantic alignment, selective copying, and error correction. The method combines multilingual candidate inputs with instruction-tuned post-editing prompts, augmented by semantic alignment guidance and lightweight knowledge distillation. This approach yields fault tolerance to poor-quality candidates and substantially improves data utilization efficiency in low-resource settings. On the Flores-200 En-XX benchmark, the method outperforms the NLLB-1.3B distilled baseline on 64% of low- and very-low-resource language pairs. After distillation, the average chrF score remains 3.1 points above the finetune-only baseline, while inference latency and computational overhead are significantly reduced.
📄 Abstract
Multilingual large language models (LLMs) are great translators, but this is largely limited to high-resource languages. For many LLMs, translating in and out of low-resource languages remains a challenging task. To maximize data efficiency in this low-resource setting, we introduce Mufu, which includes a selection of automatically generated multilingual candidates and an instruction to correct inaccurate translations in the prompt. Mufu prompts turn a translation task into a post-editing one, and seek to harness the LLM's reasoning capability with auxiliary translation candidates, from which the model is required to assess the input quality, align the semantics cross-lingually, copy from relevant inputs, and override instances that are incorrect. Our experiments on En-XX translations over the Flores-200 dataset show that LLMs finetuned on Mufu-style prompts are robust to poor-quality auxiliary translation candidates, achieving performance superior to the NLLB-1.3B distilled model in 64% of low- and very-low-resource language pairs. We then distill these models to reduce inference cost, while maintaining an average 3.1 chrF improvement over the finetune-only baseline in low-resource translations.
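To make the post-editing setup concrete, the sketch below assembles a Mufu-style prompt from a source sentence and auxiliary candidates. It is a minimal illustration: the exact template wording, field labels, and the `build_mufu_prompt` helper are assumptions for exposition, not the paper's actual prompt format.

```python
def build_mufu_prompt(source: str, candidates: dict[str, str], target_lang: str) -> str:
    """Assemble a Mufu-style post-editing prompt (illustrative sketch;
    the template wording is an assumption, not the paper's template).

    candidates maps an auxiliary language code to its machine-generated
    translation, which may be inaccurate."""
    lines = [
        f"Translate the English sentence into {target_lang}.",
        "Auxiliary candidate translations are provided below. They may",
        "contain errors: assess their quality, copy what is correct,",
        "and correct what is not.",
        "",
        f"English: {source}",
    ]
    # Each auxiliary candidate becomes one labeled line in the prompt.
    for lang, text in candidates.items():
        lines.append(f"Candidate ({lang}): {text}")
    # The model completes the prompt with the corrected translation.
    lines.append(f"{target_lang}:")
    return "\n".join(lines)
```

At inference, the finetuned model generates the target translation as the completion, so it can selectively reuse correct spans from the candidates while overriding erroneous ones.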