AI Summary
Large language models (LLMs) suffer from low inference efficiency, and redundant transformer layers can hinder task-specific representation learning. Method: We propose TALE, a training-free, task-aware transformer layer pruning algorithm for inference-time adaptation. TALE quantifies layer importance jointly via mutual-information and gradient-based metrics, dynamically identifying and removing bottleneck layers that impede task performance while allowing adjustable accuracy-efficiency trade-offs. Results: Evaluated across five mainstream LLMs (LLaMA, Qwen, Mistral, etc.) and nine NLP tasks under zero-shot and few-shot settings, TALE reduces model size and accelerates inference while improving average accuracy; fine-tuning convergence also speeds up significantly. Crucially, TALE is the first to empirically reveal the phenomenon of "layer-wise suppression of task-relevant representations," establishing a novel paradigm for efficient LLM adaptation without architectural or training modifications.
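The summary describes removing the layers that most impede task performance. A minimal sketch of that idea, assuming a hypothetical interface (this is not the authors' implementation): treat the model as a list of layer callables and greedily remove the single layer whose removal most improves a task validation score, stopping when no removal helps.

```python
# Hypothetical sketch of validation-driven layer elimination.
# `layers` and `score_fn` are toy stand-ins for a real model and benchmark.

def run(layer_list, x):
    """Apply layers sequentially to an input."""
    for layer in layer_list:
        x = layer(x)
    return x

def greedy_layer_elimination(layers, score_fn):
    """Return indices of kept layers and the final validation score.

    score_fn maps a subset of layers to a scalar validation score
    (higher is better).
    """
    kept = list(range(len(layers)))
    best = score_fn([layers[i] for i in kept])
    while len(kept) > 1:
        # Score every single-layer removal and pick the best one.
        trials = [[i for i in kept if i != idx] for idx in kept]
        scores = [score_fn([layers[i] for i in t]) for t in trials]
        top = max(range(len(trials)), key=scores.__getitem__)
        if scores[top] <= best:
            break  # no single removal improves validation; stop pruning
        kept, best = trials[top], scores[top]
    return kept, best

# Toy example: the middle layer inverts the signal and acts as a bottleneck.
layers = [lambda x: x + 1, lambda x: -x, lambda x: x + 2]
score_fn = lambda ls: -abs(run(ls, 0) - 3)  # closeness to a target output
kept, best = greedy_layer_elimination(layers, score_fn)
# kept == [0, 2]: the harmful middle layer is eliminated
```

In this toy setup the full pipeline scores -2, and removing the inverting layer raises the score to 0, so the search keeps layers 0 and 2.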
Abstract
In this paper we introduce TALE (Task-Aware Layer Elimination), an inference-time algorithm that prunes entire transformer layers from an LLM by directly optimizing task-specific validation performance. We evaluate TALE on nine tasks and five models (LLaMA 3.1 8B, Qwen 2.5 7B, Qwen 2.5 0.5B, Mistral 7B, and Lucie 7B) under both zero-shot and few-shot settings. Unlike prior approaches, TALE requires no retraining and consistently improves accuracy while reducing computational cost across all benchmarks. Applying TALE during fine-tuning yields additional performance gains, and TALE gives users flexible control over the trade-off between accuracy and efficiency. Mutual-information analysis shows that certain layers act as bottlenecks that degrade task-relevant representations; TALE's selective layer removal remedies this, producing smaller, faster, and more accurate models that are also faster to fine-tune, while offering new insights into transformer interpretability.