Iterative Layer Pruning for Efficient Translation Inference

📅 2025-10-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low deployment efficiency and high computational overhead of large language models (LLMs) in machine translation, this paper proposes an iterative structured pruning method based on layer-wise importance analysis. The method dynamically quantifies each Transformer layer’s contribution to translation performance, guiding layer-by-layer pruning and subsequent fine-tuning to achieve adaptive structural compression. Evaluated on the Aya-Expanse-8B base model for Czech→German and English→Egyptian Arabic translation tasks, the approach reduces parameter count by up to 42%, accelerates inference by 1.8×, and incurs only a marginal BLEU degradation of 0.3–0.6 points, significantly outperforming uniform and static pruning baselines. The core innovation lies in tightly coupling layer-importance quantification with iterative optimization, enabling, for the first time, high-fidelity and high-efficiency compression of open-source multilingual LLMs for translation.
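The prune-then-fine-tune loop described above can be sketched on a toy model. This is a minimal illustration, not the paper's implementation: the residual linear "layers", the probe inputs, and the use of (1 − cosine similarity) between the full and ablated final hidden states as the importance score are all assumptions standing in for real Transformer blocks and translation-quality measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a stack of Transformer layers: each "layer" is a
# residual linear map h -> h + W @ h.  (Hypothetical illustration; the
# paper prunes real Aya-Expanse-8B layers and fine-tunes after each step.)
dim, n_layers = 16, 8
weights = [rng.normal(scale=s, size=(dim, dim))
           for s in np.linspace(0.01, 0.3, n_layers)]

def forward(h, active):
    """Run the hidden state through the currently active layers, in order."""
    for i in active:
        h = h + weights[i] @ h
    return h

def layer_importance(active, probes):
    """Score each active layer by how much the final hidden state moves
    when that layer is skipped (1 - cosine similarity, averaged over probes)."""
    scores = {}
    for i in active:
        rest = [j for j in active if j != i]
        sims = []
        for h in probes:
            full, ablated = forward(h, active), forward(h, rest)
            sims.append(full @ ablated
                        / (np.linalg.norm(full) * np.linalg.norm(ablated)))
        scores[i] = 1.0 - float(np.mean(sims))
    return scores

# Iteratively remove the least important layer until the target depth.
probes = [rng.normal(size=dim) for _ in range(32)]
active = list(range(n_layers))
target = 5
while len(active) > target:
    scores = layer_importance(active, probes)
    victim = min(scores, key=scores.get)
    active.remove(victim)  # in the real pipeline: fine-tune here, then re-score
print("kept layers:", active)
```

Because importance is re-computed after every removal, the ranking adapts as the network shrinks; this is the "iterative" aspect that the summary contrasts with static, one-shot pruning.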

📝 Abstract
Large language models (LLMs) have transformed many areas of natural language processing, including machine translation. However, efficient deployment of LLMs remains challenging due to their intensive computational requirements. In this paper, we address this challenge and present our submissions to the Model Compression track at the Conference on Machine Translation (WMT 2025). In our experiments, we investigate iterative layer pruning guided by layer importance analysis. We evaluate this method using the Aya-Expanse-8B model for translation from Czech to German, and from English to Egyptian Arabic. Our approach achieves substantial reductions in model size and inference time, while maintaining the translation quality of the baseline models.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational requirements for large language models
Maintaining translation quality while compressing model size
Optimizing inference efficiency through iterative layer pruning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative layer pruning guided by importance analysis
Reduces model size while maintaining translation quality
Achieves substantial inference time reduction for LLMs
Yasmin Moslem
ADAPT Centre, Trinity College Dublin, Dublin, Ireland
Muhammad Hazim Al Farouq
Kreasof AI Research Labs, Jakarta, Indonesia
John D. Kelleher
Professor of Computer Science, Trinity College Dublin, Ireland
Machine Learning · Natural Language Processing · Precision Medicine · Artificial Intelligence · Data