🤖 AI Summary
In software testing, developers predominantly focus on “happy-path” scenarios, leaving exceptional behavior tests (EBTs) severely under-represented. To address this, we propose the first LLM-driven EBT generation framework that jointly leverages exception-throwing trace inference, guard-condition analysis of `throw` statements, and guidance from similar non-exceptional tests. Built upon a fine-tuned CodeLlama model, our approach integrates static program analysis with multi-stage, context-aware prompting to generate semantically meaningful and highly executable exception-path tests. Evaluated across multiple open-source Java and Python projects, our tool—exLong—improves real-world exception coverage by 3.2× over baseline methods, achieves an average test pass rate exceeding 78%, and is positively assessed by developers as a practical, production-ready testing aid.
📝 Abstract
Exceptional behavior tests (EBTs) are crucial in software development for verifying that code correctly handles unwanted events and throws appropriate exceptions. However, prior research has shown that developers often prioritize testing "happy paths", i.e., paths without unwanted events, over exceptional scenarios. We present exLong, a framework that automatically generates EBTs to address this gap. exLong leverages a large language model (LLM) fine-tuned from CodeLlama and incorporates reasoning about exception-throwing traces, conditional expressions that guard throw statements, and non-exceptional behavior tests that execute similar traces. Our demonstration video illustrates how exLong can effectively assist developers in creating comprehensive EBTs for their projects (available at https://youtu.be/Jro8kMgplZk).
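To make the terminology concrete, here is a minimal, hypothetical Java sketch (the class and method names are illustrative, not taken from exLong's evaluation): a `throw` statement protected by a guard condition, a non-exceptional "happy-path" check, and a corresponding exceptional behavior test. Real EBTs would typically use a framework such as JUnit's `assertThrows`; the checks below use plain Java so the example is self-contained.

```java
// Hypothetical method under test: the guard condition "capacity <= 0"
// protects the throw statement that exLong reasons about.
class BoundedBuffer {
    private final int capacity;

    BoundedBuffer(int capacity) {
        if (capacity <= 0) {                      // guard condition
            throw new IllegalArgumentException(   // exception-throwing statement
                "capacity must be positive: " + capacity);
        }
        this.capacity = capacity;
    }

    int capacity() { return capacity; }
}

public class BoundedBufferEbtDemo {
    public static void main(String[] args) {
        // Non-exceptional ("happy path") test: valid input, no exception.
        if (new BoundedBuffer(4).capacity() != 4) {
            throw new AssertionError("happy-path check failed");
        }

        // Exceptional behavior test: an input violating the guard
        // must raise IllegalArgumentException.
        boolean threw = false;
        try {
            new BoundedBuffer(0);
        } catch (IllegalArgumentException expected) {
            threw = true;
        }
        if (!threw) {
            throw new AssertionError("expected IllegalArgumentException for capacity 0");
        }
        System.out.println("EBT passed");
    }
}
```

In exLong's terms, the exception-throwing trace is the path from the constructor's entry to the `throw`, and the happy-path check illustrates the kind of similar non-exceptional test the model uses for guidance.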