🤖 AI Summary
To address two key challenges in the robustness evaluation of LLM-based NLP software, namely (1) insufficient coupling between testing methods and model behavior and (2) degraded fuzzing capability in NLG scenarios, this paper proposes BASFuzz, the first framework to treat complete prompt-example inputs as unified fuzzing targets. BASFuzz uses a text-consistency metric to guide input mutation and integrates beam search with simulated annealing into a Beam-Annealing Search algorithm that drives the fuzzing loop. It further employs information entropy to adaptively modulate mutation intensity and an elitism strategy to improve search efficiency. Evaluated on six datasets spanning representative generation and understanding tasks, BASFuzz achieves 90.34% testing effectiveness and reduces average time overhead by 2,163.85 seconds compared to the best baseline, while improving both defect detection rate and test coverage.
📝 Abstract
Fuzzing has shown great success in evaluating the robustness of intelligent natural language processing (NLP) software. As large language model (LLM)-based NLP software is widely deployed in critical industries, existing methods still face two main challenges: (1) testing methods are insufficiently coupled with the behavioral patterns of LLM-based NLP software; (2) fuzzing capability generally degrades in natural language generation (NLG) testing scenarios. To address these issues, we propose BASFuzz, an efficient fuzz testing method tailored for LLM-based NLP software. BASFuzz targets complete test inputs composed of prompts and examples, and uses a text consistency metric to guide mutations in the fuzzing loop, aligning with the behavioral patterns of LLM-based NLP software. A Beam-Annealing Search algorithm, which integrates beam search and simulated annealing, is employed to design an efficient fuzzing loop. In addition, information entropy-based adaptive adjustment and an elitism strategy further enhance fuzzing capability. We evaluate BASFuzz on six datasets in representative scenarios of NLG and natural language understanding (NLU). Experimental results demonstrate that BASFuzz achieves a testing effectiveness of 90.335% while reducing the average time overhead by 2,163.852 seconds compared to the current best baseline, enabling more effective robustness evaluation prior to software deployment.
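To make the search strategy concrete, the following is a minimal, hypothetical sketch of how a Beam-Annealing Search loop could combine beam search, simulated-annealing acceptance, and elitist preservation. All names (`beam_annealing_search`, `mutate`, `score`) and parameters are illustrative assumptions, not the paper's actual implementation; the real BASFuzz additionally guides mutation with a text consistency metric and entropy-based adaptive adjustment.

```python
import math
import random

def beam_annealing_search(seed, mutate, score, beam_width=4,
                          t0=1.0, cooling=0.9, steps=30, rng_seed=0):
    """Illustrative Beam-Annealing Search sketch (not the paper's code):
    keep a beam of candidates, mutate each, accept worse neighbors with
    simulated-annealing probability exp(delta / T), and preserve the
    best-so-far candidate (elitism). Higher score = better candidate."""
    rng = random.Random(rng_seed)
    beam = [seed]
    best = (score(seed), seed)           # elitism: best-so-far never lost
    t = t0
    for _ in range(steps):
        candidates = []
        for cand in beam:
            base = score(cand)
            for neighbor in mutate(cand, rng):
                s = score(neighbor)
                # annealing acceptance: always take improvements;
                # take worse neighbors with probability exp(delta / T)
                if s >= base or rng.random() < math.exp((s - base) / t):
                    candidates.append((s, neighbor))
        if not candidates:
            break
        # beam step: keep only the top-k accepted candidates
        candidates.sort(key=lambda p: p[0], reverse=True)
        beam = [c for _, c in candidates[:beam_width]]
        if candidates[0][0] > best[0]:
            best = candidates[0]
        t *= cooling                      # geometric cooling schedule
    return best
```

As a toy usage example, searching over integers with `score(x) = -(x - 7)**2` and neighbors `x - 1, x + 1` converges on `x = 7`; in BASFuzz the candidates would instead be mutated prompt-example inputs scored by how much they perturb the model's output.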