🤖 AI Summary
Large language models (LLMs) frequently exhibit "package hallucination" in code generation, recommending software packages that do not exist. These hallucinations pose severe supply-chain security risks, since attackers can register malicious packages under the hallucinated names. Existing hallucination-testing methods primarily target factual inconsistencies in natural language and do not address package hallucinations in code ecosystems. This paper introduces HFUZZER, a phrase-based fuzzing framework for testing LLMs for package hallucinations. HFUZZER extracts phrases from package information and coding tasks, then uses them to guide the model toward generating numerous, diverse, and relevant coding tasks that systematically expose hallucinated packages. Evaluation across multiple state-of-the-art LLMs shows that HFUZZER triggers package hallucinations in every model tested: it identifies 46 unique hallucinated packages on GPT-4o and detects 2.60x more unique hallucinated packages than a baseline mutation-based fuzzing approach.
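The verification step behind such results is easy to illustrate: any package a model recommends can be checked against the package registry, and names that fail to resolve are candidate hallucinations. Below is a minimal sketch assuming the Python/PyPI ecosystem; the paper's exact verification procedure is not described in this summary, and querying PyPI's JSON endpoint is our assumption, not HFUZZER's confirmed method.

```python
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` resolves to a real PyPI project."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:  # unknown project: a candidate hallucination
            return False
        raise

def find_hallucinated(recommended: list[str]) -> list[str]:
    """Return the recommended package names that do not exist on the registry."""
    return [pkg for pkg in recommended if not package_exists_on_pypi(pkg)]

# Example: "requests" is real; "hyperfast-json-utils" is a made-up name.
print(find_hallucinated(["requests", "hyperfast-json-utils"]))
```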
📝 Abstract
Large Language Models (LLMs) are widely used for code generation, but they pose critical security risks in practical production settings due to package hallucinations, in which LLMs recommend non-existent packages. These hallucinations can be exploited in software supply chain attacks, where malicious actors register harmful packages under the hallucinated names. Testing LLMs for package hallucinations is therefore critical for mitigating them and defending against such attacks. Although researchers have proposed testing frameworks for fact-conflicting hallucinations in natural language generation, package hallucinations remain largely unstudied. To fill this gap, we propose HFUZZER, a novel phrase-based fuzzing framework to test LLMs for package hallucinations. HFUZZER adopts fuzzing technology and guides the model to infer a wider range of reasonable information from phrases, thereby generating a sufficiently large and diverse set of coding tasks. Furthermore, HFUZZER extracts phrases from package information or coding tasks to ensure the relevance of phrases to code, thereby improving the relevance of the generated tasks and code. We evaluate HFUZZER on multiple LLMs and find that it triggers package hallucinations across all selected models. Compared to a mutational fuzzing framework, HFUZZER identifies 2.60x more unique hallucinated packages and generates more diverse tasks. When testing GPT-4o, HFUZZER finds 46 unique hallucinated packages. Further analysis reveals that GPT-4o exhibits package hallucinations not only during code generation but also when assisting with environment configuration.
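To make the abstract's workflow concrete, the skeleton below sketches one phrase-guided fuzzing round as described: extract phrases from seed text, prompt the model for a coding task built around them, generate code for that task, and collect the imported package names for registry verification. This is an interpretation of the description above, not HFUZZER's actual implementation; the helper names (`extract_phrases`, `parse_imports`, `fuzz_once`) and the `llm` callable are hypothetical stand-ins.

```python
import re

def extract_phrases(text: str, max_phrases: int = 5) -> list[str]:
    """Toy phrase extraction: deduplicated word candidates from seed text.
    HFUZZER's real extraction from package information is more involved."""
    words = re.findall(r"[A-Za-z][A-Za-z-]{2,}", text)
    return list(dict.fromkeys(w.lower() for w in words))[:max_phrases]

def parse_imports(code: str) -> set[str]:
    """Collect top-level module names from import statements in generated code."""
    pattern = r"^\s*(?:from|import)\s+([A-Za-z_]\w*)"
    return {m.group(1) for m in re.finditer(pattern, code, re.MULTILINE)}

def fuzz_once(seed_text: str, llm) -> set[str]:
    """One fuzzing round: phrases -> coding task -> code -> imported packages.
    `llm` is any callable mapping a prompt string to a completion string."""
    phrases = extract_phrases(seed_text)
    task = llm("Write a short, realistic coding task involving: " + ", ".join(phrases))
    code = llm("Solve this task in Python:\n" + task)
    return parse_imports(code)  # names to verify against the package registry
```

Each returned name would then be checked against the registry (as in the earlier sketch), and any non-existent package counts as a triggered hallucination for that fuzzing round.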