AI Summary
Existing LoRA research primarily focuses on parameter compression or architectural optimization, overlooking the heterogeneous importance of LoRA modules across layers during inference. Method: This work systematically reveals the non-uniform importance of LoRA layers during inference, identifying that lower-layer modules contribute significantly more to model understanding and prediction capability. We introduce the concept of a "boundary layer": all LoRA modules at or below this layer are essential for inference, while higher-layer modules can be safely pruned. We design a validation-set-driven mechanism to locate the boundary layer and dynamically prune LoRA structures during inference. Results: Experiments across three strong backbone models (LLaMA-2, Qwen, Phi-3) and four text generation benchmarks demonstrate that our method reduces LoRA parameters by 28.6% on average while consistently improving generation quality (BLEU +1.4, ROUGE-L +0.9), validating both the efficacy and generalizability of selectively retaining critical layers.
Abstract
Current research on LoRA primarily focuses on minimizing the number of fine-tuned parameters or optimizing its architecture. However, the necessity of all fine-tuned LoRA layers during inference remains underexplored. In this paper, we investigate the contribution of each LoRA layer to the model's ability to predict the ground truth and hypothesize that lower-layer LoRA modules play a more critical role in model reasoning and understanding. To address this, we propose a simple yet effective method to enhance the performance of large language models (LLMs) fine-tuned with LoRA. Specifically, we identify a "boundary layer" that distinguishes essential LoRA layers by analyzing a small set of validation samples. During inference, we drop all LoRA layers beyond this boundary. We evaluate our approach on three strong baselines across four widely-used text generation datasets. Our results demonstrate consistent and significant improvements, underscoring the effectiveness of selectively retaining critical LoRA layers during inference.
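The validation-set-driven search described in the abstract can be sketched as a simple sweep: for each candidate boundary layer k, score the model on a small validation set with LoRA active only on layers 0..k, and keep the k with the best score. The sketch below is a minimal illustration of that idea, not the paper's implementation; `validation_score` stands in for an actual evaluation pass (e.g. BLEU or ROUGE on held-out samples), and the toy numbers at the bottom are fabricated purely to show the mechanics.

```python
# Hedged sketch of a boundary-layer search over LoRA-equipped transformer layers.
# `validation_score(k)` is a hypothetical callback: it should run the model on a
# small validation set with LoRA modules enabled only on layers 0..k (inclusive)
# and all higher-layer LoRA modules dropped, returning a quality score.

def find_boundary_layer(num_layers, validation_score):
    """Return the layer index k maximizing validation quality when all
    LoRA modules above layer k are pruned at inference time."""
    best_k, best_score = num_layers - 1, float("-inf")
    for k in range(num_layers):
        score = validation_score(k)  # e.g. BLEU/ROUGE on held-out samples
        if score > best_score:
            best_k, best_score = k, score
    return best_k

# Toy usage with fabricated scores that peak at layer 20, mimicking the
# finding that lower-layer LoRA modules carry most of the contribution.
toy_scores = {k: -abs(k - 20) for k in range(32)}
boundary = find_boundary_layer(32, toy_scores.__getitem__)
print(boundary)  # 20
```

In a real setup the callback would toggle LoRA adapters per layer (for instance by zeroing or detaching the low-rank updates above k) before each evaluation pass; the sweep itself is cheap because it only needs a small validation set.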