🤖 AI Summary
Large language models (LLMs) frequently exhibit “library hallucinations” (inventing non-existent libraries or library members) during code generation, yet the impact of realistic user prompt variations on this phenomenon remains poorly understood. Method: We systematically investigate how natural prompt perturbations, including typos, fictitious names, and time-sensitive phrasings, affect library hallucination rates across six state-of-the-art LLMs. Using a controlled experimental framework, we construct a prompt dataset grounded in real-world developer queries from Stack Overflow and other authentic sources. For the first time, we separately quantify hallucinations at both the library-name and library-member levels. Contribution/Results: Single-character typos induce hallucinations in up to 26% of tasks; fictitious library names are accepted in up to 99% of tasks; time-sensitive prompts yield hallucination rates of up to 84%; and existing prompt-engineering mitigations prove largely ineffective and highly model-dependent. Our findings expose a structural vulnerability of LLMs to natural input variations in practical coding scenarios, providing empirical grounding for secure prompt design and robustness evaluation.
📝 Abstract
Large language models (LLMs) are increasingly used to generate code, yet they continue to hallucinate, often inventing non-existent libraries. Such library hallucinations are not just benign errors: they can mislead developers, break builds, and expose systems to supply chain threats such as slopsquatting. Despite increasing awareness of these risks, little is known about how real-world prompt variations affect hallucination rates. Therefore, we present the first systematic study of how user-level prompt variations impact library hallucinations in LLM-generated code. We evaluate six diverse LLMs across two hallucination types: library name hallucinations (invalid imports) and library member hallucinations (invalid calls from valid libraries). We investigate how realistic user language extracted from developer forums and user errors of varying degrees (one- or multi-character misspellings and completely fake names/members) affect LLM hallucination rates. Our findings reveal systemic vulnerabilities: one-character misspellings in library names trigger hallucinations in up to 26% of tasks, fake library names are accepted in up to 99% of tasks, and time-related prompts lead to hallucinations in up to 84% of tasks. Prompt engineering shows promise for mitigating hallucinations, but remains inconsistent and LLM-dependent. Our results underscore the fragility of LLMs to natural prompt variation and highlight the urgent need for safeguards against library-related hallucinations and their potential exploitation.
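The abstract's two hallucination types (invalid imports vs. invalid members of valid libraries) can be illustrated with a minimal Python checker. This is only a sketch, not the paper's evaluation harness: it validates names against the locally installed environment, and the function name and the fake package `jsonify_utils` are invented here for illustration.

```python
import importlib
import importlib.util

def classify_hallucination(library, member=None):
    """Classify a generated import (and optional member access) against
    the locally installed environment.

    Returns "library-name hallucination" for an unresolvable import,
    "library-member hallucination" for a missing attribute of a valid
    library, and "valid" otherwise.
    """
    # An unresolvable top-level module name means the import itself is invalid.
    if importlib.util.find_spec(library) is None:
        return "library-name hallucination"
    if member is not None:
        module = importlib.import_module(library)
        # A real library without the referenced attribute is a
        # member-level hallucination (an invalid call from a valid library).
        if not hasattr(module, member):
            return "library-member hallucination"
    return "valid"

# "jsonify_utils" is a fabricated package name used purely for illustration.
print(classify_hallucination("jsonify_utils"))  # → library-name hallucination
print(classify_hallucination("json", "dumpz"))  # → library-member hallucination
print(classify_hallucination("json", "dumps"))  # → valid
```

A real pipeline would check against a package index snapshot rather than the local environment, since an honest but uninstalled library is not a hallucination.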