A Study to Evaluate the Impact of LoRA Fine-Tuning on the Performance of Non-Functional Requirements Classification

📅 2025-02-22
🏛️ Artificial Intelligence, Soft Computing And Application Trends 2025
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the high computational cost of full-parameter fine-tuning in non-functional requirement (NFR) classification. We propose a lightweight and efficient fine-tuning paradigm that innovatively integrates Low-Rank Adaptation (LoRA) with prompt learning. Specifically, trainable low-rank matrices are injected into a pre-trained language model via LoRA, while task-specific prompts are incorporated to guide inference—significantly reducing the number of updated parameters. Experiments demonstrate a 68% reduction in training cost compared to full-parameter fine-tuning, with only a 2–3 percentage-point drop in classification accuracy. The method exhibits strong robustness and generalization across large-scale datasets and large models. Our key contributions are: (1) the first application of LoRA to NFR classification; (2) empirical validation of an optimal accuracy–efficiency trade-off under low-resource fine-tuning; and (3) a scalable technical pathway for few-shot, computationally expensive NLP tasks in software requirements engineering.
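The core idea of LoRA described above can be sketched in a few lines: the pre-trained weight matrix W is frozen, and only a low-rank update B·A is trained, which shrinks the number of updated parameters dramatically. The following is an illustrative NumPy sketch under assumed dimensions (d = 768, rank r = 8); it is not the paper's implementation.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=8):
    """Forward pass with a frozen base weight W plus a scaled low-rank update B @ A."""
    return x @ (W + (alpha / r) * (B @ A)).T

d_in, d_out, r = 768, 768, 8            # assumed hidden size and rank
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection (zero-initialized)

full_params = W.size                     # parameters updated by full fine-tuning
lora_params = A.size + B.size            # parameters updated by LoRA
print(f"trainable fraction: {lora_params / full_params:.3%}")  # → trainable fraction: 2.083%

# With B initialized to zero, the adapted layer initially matches the base model.
x = rng.standard_normal((2, d_in))
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

Because B starts at zero, training begins exactly at the pre-trained model and only the small A and B matrices receive gradient updates, which is where the reported cost savings come from.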

📝 Abstract
Classifying Non-Functional Requirements (NFRs) in the software development life cycle is critical. Inspired by the theory of transfer learning, researchers apply powerful pre-trained models to NFR classification. However, full fine-tuning, which updates all parameters of the pre-trained models, is often impractical due to the huge number of parameters involved (e.g., 175 billion trainable parameters in GPT-3). In this paper, we apply the Low-Rank Adaptation (LoRA) fine-tuning approach to NFR classification based on prompt-based learning and investigate its impact. The experiments show that LoRA can significantly reduce the execution cost (up to a 68% reduction) without much loss of classification effectiveness (only a 2%–3% decrease). The results show that LoRA can be practical in more complicated classification cases with larger datasets and pre-trained models.
Problem

Research questions and friction points this paper is trying to address.

Evaluates LoRA fine-tuning impact on NFR classification performance.
Reduces execution cost by 68% with minimal effectiveness loss.
Demonstrates LoRA practicality for complex, large-scale classification tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LoRA fine-tuning reduces execution cost significantly
LoRA applied in NFR classification via prompt-based learning
LoRA maintains effectiveness with minimal performance loss
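The prompt-based learning component mentioned above can be illustrated with a cloze-style template: the requirement text is wrapped in a prompt whose masked slot a language model fills with a category label. The template and label set below are assumptions for illustration, not the paper's exact prompt.

```python
# Example NFR label set (assumed; the paper's taxonomy may differ).
NFR_LABELS = ["security", "performance", "usability", "reliability"]

def build_prompt(requirement: str) -> str:
    """Wrap a requirement in a cloze template; [MASK] marks the label slot
    that a masked language model would be asked to fill."""
    return f'Requirement: "{requirement}" This is a [MASK] requirement.'

prompt = build_prompt("The system shall encrypt all stored passwords.")
print(prompt)
# → Requirement: "The system shall encrypt all stored passwords." This is a [MASK] requirement.
```

In this setup, only the LoRA matrices (and possibly the prompt embeddings) are trained, while the prediction for the masked slot is mapped onto the NFR label set.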
Xia Li
The Department of Software Engineering and Game Design and Development, Kennesaw State University, Marietta, USA
Allen Kim
PhD, Computer Science