🤖 AI Summary
This study investigates in-context learning for machine translation (ICL-MT) in low-resource languages, using Manchu as a case study, focusing on how the quality and type of external resources—dictionaries, grammar books, and parallel example sentences—affect translation performance. To isolate the ICL effect from knowledge already encoded in model parameters, we propose *encrypted language perturbation*, a novel method that suppresses pre-existing knowledge in large language models. We further design a resource-controllable injection strategy that integrates each resource type into the prompt in a structured way. High-quality ICL outputs are then leveraged to construct synthetic parallel corpora, which bootstrap neural MT training. Experiments show that dictionaries and parallel examples significantly improve translation quality, whereas grammar books yield only marginal gains; the synthetic data alleviates data scarcity, boosting low-resource neural MT BLEU by +3.2. This work establishes a methodology and empirical benchmark for controllable, interpretable ICL-MT in resource-constrained settings.
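The "encrypted language perturbation" idea can be illustrated with a toy sketch: deterministically remap the characters of the source text so that the LLM cannot rely on any memorized Manchu, while the prompt's dictionary and example resources are remapped consistently. The rotation cipher below is a hypothetical stand-in, not the paper's actual scheme, and the sample string is illustrative only.

```python
import string

def build_cipher(shift: int = 7) -> dict:
    """Build a fixed substitution table: each lowercase letter
    is rotated by `shift` positions (a hypothetical scheme)."""
    letters = string.ascii_lowercase
    rotated = letters[shift:] + letters[:shift]
    return str.maketrans(letters, rotated)

def encrypt(text: str, table: dict) -> str:
    """Apply the substitution; non-letter characters pass through,
    so tokenization and punctuation structure are preserved."""
    return text.lower().translate(table)

table = build_cipher()
# The same table must be applied to the source sentence AND to every
# dictionary entry and parallel example injected into the prompt,
# so in-context mappings stay consistent while prior knowledge is useless.
print(encrypt("bithe be hula", table))
```

Because the mapping is a bijection on characters, any translation ability the model shows on the encrypted text must come from the in-context resources rather than from pretraining exposure to the language.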
📝 Abstract
In-context machine translation (MT) with large language models (LLMs) is a promising approach for low-resource MT, as it can readily take advantage of linguistic resources such as grammar books and dictionaries. Such resources are usually selectively integrated into the prompt so that LLMs can directly perform translation without any task-specific training, via their in-context learning (ICL) capability. However, the relative importance of each type of resource, e.g., dictionaries, grammar books, and retrieved parallel examples, is not entirely clear. To address this gap, this study systematically investigates how each resource and its quality affect translation performance, with the Manchu language as our case study. To remove any prior knowledge of Manchu encoded in the LLM parameters and single out the effect of ICL, we also experiment with an encrypted version of Manchu texts. Our results indicate that high-quality dictionaries and good parallel examples are very helpful, while grammars hardly help. In a follow-up study, we showcase a promising application of in-context MT: parallel data augmentation as a way to bootstrap a conventional MT model. When monolingual data abound, generating synthetic parallel data through in-context MT offers a pathway to mitigate data scarcity and build effective and efficient low-resource neural MT systems.