Improving LLM Abilities in Idiomatic Translation

📅 2024-07-03
🏛️ arXiv.org
📈 Citations: 2
Influential: 1
🤖 AI Summary
Large language models (LLMs) frequently exhibit semantic inaccuracy, stylistic distortion, and loss of cultural imagery in idiom translation, impeding cross-cultural comprehension. To address these challenges, we propose a dual-path idiom alignment framework: (1) Sentence-BERT–based semantic similarity matching for precise retrieval of target-language idioms corresponding to source-language idioms; and (2) LLM-driven generation of target-language idioms, overcoming the limitation of conventional knowledge bases that provide only source-language definitions. Our method achieves statistically significant improvements over baseline systems and pure LLM-based generation in human evaluations on English–Chinese bilingual idiom translation. Furthermore, we extend IdiomKB—the first large-scale idiom knowledge base—to low-resource Urdu, constructing a high-quality bilingual idiom parallel dataset. This work establishes a scalable, interpretable, and culturally adaptive paradigm for cross-lingual idiom translation.

📝 Abstract
For large language models (LLMs) like NLLB and GPT, translating idioms remains a challenge. Our goal is to enhance translation fidelity by improving LLM processing of idiomatic language while preserving the original linguistic style. This has a significant social impact, as it preserves cultural nuances and ensures translated texts retain their intent and emotional resonance, fostering better cross-cultural communication. Previous work has utilized knowledge bases like IdiomKB by providing the LLM with the meaning of an idiom to use in translation. Although this method yielded better results than a direct translation, it is still limited in its ability to preserve idiomatic writing style across languages. In this research, we expand upon the knowledge base to find corresponding idioms in the target language. Our research performs translations using two methods: the first employs a SentenceTransformers model to compute cosine similarity scores between the meanings of the source and target language idioms, selecting the best idiom (Cosine Similarity method). The second uses an LLM to find a corresponding idiom in the target language for use in the translation (LLM-generated idiom method). As a baseline, we performed a direct translation without providing additional information. Human evaluations on English→Chinese and Chinese→English translation show that the Cosine Similarity Lookup method outperformed the others in all GPT-4o translations. To further build upon IdiomKB, we developed a low-resource Urdu dataset containing Urdu idioms and their translations. Despite dataset limitations, the Cosine Similarity Lookup method shows promise, potentially overcoming language barriers and enabling the exploration of diverse literary works in Chinese and Urdu. (LoResLM @ COLING Preprint)
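The Cosine Similarity Lookup method described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the vectors stand in for Sentence-BERT meaning embeddings (the embedding step is omitted), and the function name is illustrative.

```python
import numpy as np

def best_matching_idiom(source_vec, candidate_vecs, candidate_idioms):
    """Pick the target-language idiom whose meaning embedding has the
    highest cosine similarity to the source idiom's meaning embedding."""
    # Normalize so that dot products equal cosine similarities.
    src = source_vec / np.linalg.norm(source_vec)
    cands = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    scores = cands @ src                 # cosine similarity per candidate
    best = int(np.argmax(scores))
    return candidate_idioms[best], float(scores[best])
```

In the paper's pipeline, the retrieved target-language idiom is then supplied to the LLM alongside the sentence to translate; with a real SentenceTransformers model, `source_vec` and `candidate_vecs` would come from encoding the idioms' definitions.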
Problem

Research questions and friction points this paper is trying to address.

idiom translation
cultural understanding
accuracy issues
Innovation

Methods, ideas, or system contributions that make the work stand out.

Improve Idiom Translation
Similarity Approach
Urdu Idiom Dataset
Sundesh Donthi
Algoverse AI Research
Maximilian Spencer
Algoverse AI Research
Om Patel
Children's Hospital of Philadelphia
Machine Learning and Biology
Joon Doh
Algoverse AI Research
Eid Rodan
Algoverse AI Research