TALL -- A Trainable Architecture for Enhancing LLM Performance in Low-Resource Languages

📅 2025-06-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the performance degradation of large language models (LLMs) on low-resource languages caused by insufficient training data, this paper proposes TALL, a framework that tightly couples an LLM with two bilingual translation models. TALL maps low-resource-language inputs into a high-resource semantic space and recovers language-specific characteristics via learnable dimension-alignment layers and lightweight custom Transformer components. Training follows a parameter-efficient paradigm: the pre-trained LLM and translation models are frozen, and only lightweight adapter modules are optimized. Evaluated on Hebrew, a representative low-resource language, TALL substantially outperforms baselines including direct inference, naive translation, and fine-tuning approaches, demonstrating the effectiveness of cross-lingual representation transfer while balancing computational efficiency with performance gains.

📝 Abstract
Large Language Models (LLMs) excel in high-resource languages but struggle with low-resource languages due to limited training data. This paper presents TALL (Trainable Architecture for Enhancing LLM Performance in Low-Resource Languages), which integrates an LLM with two bilingual translation models. TALL transforms low-resource inputs into high-resource representations, leveraging the LLM's capabilities while preserving linguistic features through dimension alignment layers and custom transformers. Our experiments on Hebrew demonstrate significant improvements over several baselines, including direct use, naive translation, and fine-tuning approaches. The architecture employs a parameter-efficient strategy, freezing pre-trained components while training only lightweight adapter modules, balancing computational efficiency with performance gains.
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM performance for low-resource languages
Overcoming limited training data in low-resource languages
Balancing computational efficiency with performance gains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates LLM with bilingual translation models
Uses dimension alignment and custom transformers
Employs parameter-efficient lightweight adapter modules
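
The dimension-alignment idea above can be sketched minimally: a learned linear map projects frozen translation-model hidden states into the LLM's embedding space, and only that map (the adapter) is trainable. The hidden sizes, the purely linear form, and the variable names here are illustrative assumptions, not the paper's exact layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: translator hidden dim, LLM embedding dim, sequence length.
D_TRANS, D_LLM, SEQ = 512, 4096, 8

# Stand-in for the frozen translation model's encoder output
# on a low-resource-language sentence (not trained).
trans_hidden = rng.normal(size=(SEQ, D_TRANS))

# Trainable dimension-alignment layer: learned W, b projecting
# translator states into the LLM's embedding space.
W = rng.normal(scale=0.02, size=(D_TRANS, D_LLM))
b = np.zeros(D_LLM)

# Aligned representations, ready to feed into the frozen LLM backbone.
aligned = trans_hidden @ W + b  # shape (SEQ, D_LLM)

# Only the adapter's parameters would receive gradients.
trainable_params = W.size + b.size
```

In a full implementation the alignment layer would sit alongside the paper's custom Transformer blocks, but the parameter accounting is the point: the trainable adapter is tiny relative to the frozen LLM and translation backbones.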