🤖 AI Summary
Large language model (LLM)-integrated software systems lack systematic testing practices. Method: We conducted the first empirical study on testing LLM-driven systems, analyzing 99 authentic student-developed project reports using thematic analysis and structured coding; the practices reported in these projects included exploratory testing, unit testing, and prompt iteration. Contribution/Results: We identify LLM-specific challenges, including prompt sensitivity and hallucination, and propose a hybrid verification paradigm that integrates source-code logic with behavioral awareness. Our findings reveal fundamental limitations of conventional testing methods for generative AI systems, demonstrate the need to combine human-in-the-loop validation with automation, and establish prompt iteration as a core testing activity. This work introduces a novel paradigm and an actionable methodology for testing in LLM-based software engineering.
📝 Abstract
Background: Software systems powered by large language models (LLMs) are becoming a routine part of everyday technologies, supporting applications across a wide range of domains. In software engineering, many studies have focused on how LLMs support tasks such as code generation, debugging, and documentation. However, there has been limited focus on how full systems that integrate LLMs are tested during development. Aims: This study explores how LLM-powered systems are tested in the context of real-world application development. Method: We conducted an exploratory case study using 99 individual reports written by students who built and deployed LLM-powered applications as part of a university course. Each report was independently analyzed using thematic analysis, supported by a structured coding process. Results: Testing strategies combined manual and automated methods to evaluate both system logic and model behavior. Common practices included exploratory testing, unit testing, and prompt iteration. Reported challenges included integration failures, unpredictable outputs, prompt sensitivity, hallucinations, and uncertainty about correctness. Conclusions: Testing LLM-powered systems required adapting traditional verification methods, blending source-level reasoning with behavior-aware evaluation. These findings provide evidence on the practical context of testing generative components in software systems.
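To make the "behavior-aware" unit testing mentioned above concrete, here is a minimal sketch. All names (`summarize`, `FakeLLM`) are hypothetical illustrations, not code from the studied projects: the LLM call is stubbed with a deterministic fake so the surrounding system logic can be tested, and assertions check output properties rather than exact strings, since exact-match oracles are brittle for generative output.

```python
def summarize(text: str, llm) -> str:
    """Hypothetical app code: ask the model for a one-sentence summary."""
    prompt = f"Summarize in one sentence: {text}"
    return llm.complete(prompt).strip()


class FakeLLM:
    """Deterministic stand-in for a real model client, used only in tests."""

    def complete(self, prompt: str) -> str:
        return "  A short summary of the input.  "


def test_summarize_behavioral_properties():
    out = summarize("Some long input text about testing LLM systems.", FakeLLM())
    # Assert behavioral properties instead of an exact expected string:
    assert out, "summary should not be empty"
    assert out == out.strip(), "whitespace should be normalized"
    assert len(out) < 200, "summary should be concise"


test_summarize_behavioral_properties()
```

Running the same property-style assertions against live model output (rather than the fake) is where the human-in-the-loop and prompt-iteration practices reported in the study come in, since live outputs vary from run to run.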