🤖 AI Summary
This work addresses the long-standing oracle problem in software testing by proposing a purely documentation-driven large language model (LLM) approach for automated test assertion generation: it requires only software specification documents and code comments, with no access to source code, and infers expected behavior semantically. The method integrates retrieval-augmented generation (RAG) with four prompt variants to systematically investigate how context design affects assertion accuracy. Evaluated through mutation testing on 142 Java classes, the Extended Prompt variant achieves 30.0% assertion accuracy, significantly surpassing the state-of-the-art TOGA approach (8.2%); effectiveness is further validated on 203 unique bug-revealing test cases. The core contribution is the establishment of a "Document-as-Oracle" paradigm: a scalable, low-intrusion, automated testing solution particularly suited to black-box and legacy systems where source code is unavailable or inaccessible.
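The summary mentions four prompt variants that differ in how much documentation context reaches the LLM. The following is a hypothetical sketch of how such variants might be assembled; the function and field names are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of documentation-only prompt assembly for oracle
# inference. All names and prompt wording are illustrative assumptions.

def build_prompt(variant, method_doc, class_doc=None, retrieved_docs=None):
    """Assemble an LLM prompt from documentation alone (no source code)."""
    if variant == "simple":
        # Simple Prompt: only the method's documentation/comments.
        return f"Method documentation:\n{method_doc}\nInfer the expected assertion."
    if variant == "extended":
        # Extended Prompt: adds surrounding class-level documentation.
        return (f"Class documentation:\n{class_doc}\n"
                f"Method documentation:\n{method_doc}\n"
                "Infer the expected assertion for a test of this method.")
    if variant == "rag_generic":
        # RAG with a generic prompt: retrieved spec passages only,
        # without the class/method under test as context.
        context = "\n".join(retrieved_docs or [])
        return (f"Relevant specification excerpts:\n{context}\n"
                "Infer the expected behavior.")
    if variant == "rag_simple":
        # RAG with Simple Prompt: retrieved passages plus method documentation.
        context = "\n".join(retrieved_docs or [])
        return (f"Relevant specification excerpts:\n{context}\n"
                f"Method documentation:\n{method_doc}\n"
                "Infer the expected assertion.")
    raise ValueError(f"unknown variant: {variant}")
```

In this sketch the variants form a ladder of increasing context, which is what lets the evaluation isolate how context design affects assertion accuracy.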
📝 Abstract
Automated test generation is crucial for ensuring the reliability and robustness of software applications while reducing the manual effort needed. Although significant progress has been made in test generation research, generating valid test oracles remains an open problem. To address this challenge, we present AugmenTest, an approach leveraging Large Language Models (LLMs) to infer correct test oracles from the available documentation of the software under test. Unlike most existing methods, which rely on code, AugmenTest uses the semantic capabilities of LLMs to infer the intended behavior of a method from documentation and developer comments, without examining the code. AugmenTest includes four variants: Simple Prompt, Extended Prompt, RAG with a generic prompt (without the context of the class or method under test), and RAG with Simple Prompt, each offering a different level of contextual information to the LLM. To evaluate our work, we selected 142 Java classes and generated multiple mutants for each. We then generated tests from these mutants, keeping only tests that passed on the mutant but failed on the original class, to ensure that the tests effectively captured bugs. This yielded 203 unique tests with distinct bugs, which were then used to evaluate AugmenTest. Results show that, in the most conservative scenario, AugmenTest's Extended Prompt consistently outperformed the Simple Prompt, achieving a 30% success rate for generating correct assertions. In comparison, the state-of-the-art TOGA approach achieved 8.2%. Contrary to our expectations, the RAG-based approaches did not lead to improvements, achieving an 18.2% success rate in the most conservative scenario.
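The evaluation protocol described above filters generated tests down to those that demonstrably capture a bug: a test is kept only if it passes on the mutant but fails on the original class. A minimal sketch of that selection step, assuming a hypothetical `run_test` callable that executes a test against a given target and reports whether it passed:

```python
# Hedged sketch of the mutation-based test selection from the abstract:
# retain only tests that PASS on the mutant but FAIL on the original class,
# so each retained test is known to capture the injected bug.
# `run_test(test, target)` is a hypothetical callable returning True on pass.

def select_bug_revealing_tests(tests, run_test):
    selected = []
    for test in tests:
        passes_on_mutant = run_test(test, target="mutant")
        passes_on_original = run_test(test, target="original")
        # A test written against the mutant's (buggy) behavior that fails
        # on the original is evidence it encodes the mutant's defect.
        if passes_on_mutant and not passes_on_original:
            selected.append(test)
    return selected
```

Tests passing on both versions are discarded because they cannot distinguish the buggy from the correct behavior, which is what makes the resulting 203-test benchmark a meaningful oracle-quality measure.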