🤖 AI Summary
This study addresses the high computational cost of full-parameter fine-tuning in non-functional requirement (NFR) classification. We propose a lightweight and efficient fine-tuning paradigm that innovatively integrates Low-Rank Adaptation (LoRA) with prompt learning. Specifically, trainable low-rank matrices are injected into a pre-trained language model via LoRA, while task-specific prompts are incorporated to guide inference—significantly reducing the number of updated parameters. Experiments demonstrate a 68% reduction in training cost compared to full-parameter fine-tuning, with only a 2–3 percentage-point drop in classification accuracy. The method exhibits strong robustness and generalization across large-scale datasets and large models. Our key contributions are: (1) the first application of LoRA to NFR classification; (2) empirical validation of an optimal accuracy–efficiency trade-off under low-resource fine-tuning; and (3) a scalable technical pathway for few-shot, computationally expensive NLP tasks in software requirements engineering.
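The low-rank update described above can be illustrated with a minimal sketch (not the paper's actual code; dimensions and names are illustrative). A frozen pre-trained weight matrix W is augmented with a trainable product of two small matrices, so only the low-rank factors are updated during fine-tuning:

```python
import numpy as np

# Minimal LoRA sketch (hypothetical): the frozen weight W gains a trainable
# low-rank correction, W' = W + (alpha / r) * B @ A, with rank r << d.
rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 768, 768, 8, 16

W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (init to 0)

def lora_forward(x):
    """Forward pass: frozen path plus scaled low-rank path."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)

# With B initialized to zero, the adapted layer initially equals the frozen one.
assert np.allclose(y, W @ x)

# Trainable parameters shrink from d_out*d_in to r*(d_in + d_out).
full_params = d_out * d_in
lora_params = r * (d_in + d_out)
print(f"trainable: {lora_params} vs {full_params} "
      f"({100 * (1 - lora_params / full_params):.1f}% fewer)")
```

Because B starts at zero, training begins exactly at the pre-trained model's behavior; for this 768x768 layer the trainable parameter count drops by roughly 98%, which is the mechanism behind the training-cost reduction the study reports.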
📝 Abstract
Classifying Non-Functional Requirements (NFRs) in the software development life cycle is a critical task. Inspired by transfer learning, researchers have applied powerful pre-trained models to NFR classification. However, full fine-tuning, which updates all parameters of a pre-trained model, is often impractical due to the huge number of parameters involved (e.g., 175 billion trainable parameters in GPT-3). In this paper, we apply the Low-Rank Adaptation (LoRA) fine-tuning approach to prompt-based NFR classification and investigate its impact. The experiments show that LoRA significantly reduces the execution cost (by up to 68%) with only a small loss of classification effectiveness (a 2%–3% decrease). The results suggest that LoRA is practical for more complex classification cases with larger datasets and pre-trained models.
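The prompt-based learning component mentioned above can be sketched as a cloze-style template plus a verbalizer; the template, label words, and categories here are illustrative assumptions, not the paper's actual prompts:

```python
# Hypothetical sketch of prompt-based NFR classification. A cloze template
# wraps each requirement; the masked-LM's predicted label word is mapped
# back to an NFR category via a verbalizer (all names assumed).

TEMPLATE = "Requirement: {text} This concerns the {mask} of the system."

VERBALIZER = {  # label word -> NFR category (assumed mapping)
    "security":    "Security",
    "performance": "Performance",
    "usability":   "Usability",
}

def build_prompt(requirement: str, mask_token: str = "[MASK]") -> str:
    """Fill the template so a masked language model can predict the label word."""
    return TEMPLATE.format(text=requirement, mask=mask_token)

def verbalize(predicted_word: str) -> str:
    """Map the model's predicted label word to an NFR class."""
    return VERBALIZER.get(predicted_word.lower(), "Unknown")

prompt = build_prompt("The system shall encrypt all stored passwords.")
print(prompt)
print(verbalize("security"))  # -> "Security"
```

In this setup the pre-trained model's masked-token prediction drives classification directly, which is what allows LoRA's small set of trainable parameters to suffice: the task is reframed to match the model's pre-training objective rather than attaching a freshly initialized classification head.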