Large Language Models for Automated Web-Form-Test Generation: An Empirical Study

πŸ“… 2024-05-16
πŸ›οΈ ACM Transactions on Software Engineering and Methodology
πŸ“ˆ Citations: 4
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Complex HTML form structures impede automatic contextual information extraction, hindering effective test case generation for web forms. Method: This work presents the first systematic evaluation of 11 large language models (e.g., GPT-4, GLM-4, Baichuan2) for web form test generation. We propose three HTML structural pruning techniques to improve context fidelity and introduce a structured HTML prompting methodβ€”Parser-Processed HTML Prompting (PH-P)β€”to enhance model comprehension of form semantics. Contribution/Results: Empirical evaluation on 146 open-source Java web forms shows that GPT-4 with PH-P achieves a 99.54% submission success rate. PH-P attains a mean success rate of 70.63%, significantly outperforming Raw HTML (60.21%) and LLM-Processed HTML (50.27%). This study establishes a reproducible methodology and empirical benchmark for LLM-driven web automation testing.

πŸ“ Abstract
Testing web forms is an essential activity for ensuring the quality of web applications. It typically involves evaluating the interactions between users and forms. Automated test-case generation remains a challenge for web-form testing: Due to the complex, multi-level structure of web pages, it can be difficult to automatically capture their inherent contextual information for inclusion in the tests. Large Language Models (LLMs) have shown great potential for contextual text generation. This motivated us to explore how they could generate automated tests for web forms, making use of the contextual information within form elements. To the best of our knowledge, no comparative study examining different LLMs has yet been reported for web-form-test generation. To address this gap in the literature, we conducted a comprehensive empirical study investigating the effectiveness of 11 LLMs on 146 web forms from 30 open-source Java web applications. In addition, we propose three HTML-structure-pruning methods to extract key contextual information. The experimental results show that different LLMs can achieve different testing effectiveness, with the GPT-4, GLM-4, and Baichuan2 LLMs generating the best web-form tests. Compared with GPT-4, the other LLMs had difficulty generating appropriate tests for the web forms: Their successfully-submitted rates (SSRs) — the proportions of the LLM-generated web-form tests that could be successfully inserted into the web forms and submitted — were 9.10% to 74.15% lower. Our findings also show that, for all LLMs, when the designed prompts included complete and clear contextual information about the web forms, more effective web-form tests were generated. Specifically, when using Parser-Processed HTML for Task Prompt (PH-P), the SSR averaged 70.63%, higher than the 60.21% for Raw HTML for Task Prompt (RH-P) and 50.27% for LLM-Processed HTML for Task Prompt (LH-P).
With RH-P, GPT-4’s SSR was 98.86%, outperforming models like LLaMa2 (7B) with 34.47% and GLM-4V with 0%. Similarly, with PH-P, GPT-4 reached an SSR of 99.54%, the highest among all models and prompt types. Finally, this paper also highlights strategies for selecting LLMs based on performance metrics, and for optimizing the prompt design to improve the quality of the web-form tests.
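To make the idea of HTML-structure pruning concrete, the sketch below keeps only form-relevant elements and the label text that carries field semantics, discarding scripts, navigation, and other page chrome. This is an illustrative assumption, not the paper's actual PH-P implementation: the set of retained tags and the flat output format are choices made here for demonstration.

```python
# Minimal sketch of an HTML-structure-pruning pass (illustrative; the
# paper's actual PH-P pruning rules are not reproduced here).
from html.parser import HTMLParser

# Assumed set of form-relevant tags to retain (an illustrative choice).
FORM_TAGS = {"form", "input", "select", "textarea", "label", "button", "option"}

class FormPruner(HTMLParser):
    """Collect a flat, pruned view of form-related elements and label text."""
    def __init__(self):
        super().__init__()
        self.pruned = []      # pruned lines of form context
        self._in_label = 0    # label text describes the adjacent field

    def handle_starttag(self, tag, attrs):
        if tag in FORM_TAGS:
            # Keep attributes with values (name, type, id, action, ...).
            attr_str = " ".join(f'{k}="{v}"' for k, v in attrs if v)
            self.pruned.append(f"<{tag} {attr_str}>".replace(" >", ">"))
        if tag == "label":
            self._in_label += 1

    def handle_endtag(self, tag):
        if tag == "label":
            self._in_label -= 1

    def handle_data(self, data):
        text = data.strip()
        if self._in_label and text:
            self.pruned.append(text)

page = """
<html><head><title>Signup</title><script>var x=1;</script></head>
<body><nav>Home | About</nav>
<form action="/signup" method="post">
  <label for="email">Email address</label>
  <input type="email" id="email" name="email" required>
  <button type="submit">Register</button>
</form></body></html>
"""

pruner = FormPruner()
pruner.feed(page)
print("\n".join(pruner.pruned))
```

The pruned output contains only the form tag, its fields, and the label text, giving the model a short, high-signal context instead of the full page.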
Problem

Research questions and friction points this paper is trying to address.

Automated test generation for web forms using LLMs
Comparing effectiveness of 11 LLMs on 146 web forms
Optimizing prompt design with HTML-structure-pruning methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using LLMs for automated web-form test generation
Proposing HTML-structure-pruning methods for context extraction
Optimizing prompt design with contextual information
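The prompt-design idea above can be sketched as a template that combines pruned form context with a task instruction. The exact prompt wording used in the paper is not given in this summary, so the template below is a hypothetical example of a PH-P-style task prompt.

```python
# Hedged sketch: assembling a PH-P-style task prompt from pruned form HTML.
# The template wording is an assumption for illustration, not the paper's.
def build_form_test_prompt(pruned_html: str, page_title: str) -> str:
    """Combine pruned form context with an LLM task instruction."""
    return (
        f"You are testing the web form on the page '{page_title}'.\n"
        "Below is the parser-processed (pruned) HTML of the form:\n\n"
        f"{pruned_html}\n\n"
        "Generate one realistic value for each input field so that the "
        "form can be filled in and successfully submitted. "
        "Return the values as JSON keyed by each input's name attribute."
    )

prompt = build_form_test_prompt(
    '<form action="/signup">\n<label>Email address</label>\n'
    '<input type="email" name="email">',
    "Signup",
)
print(prompt)
```

Keeping the instruction and the pruned context in one prompt lets the model ground generated values in the field names and label text, which is the intuition behind the higher SSRs reported for PH-P.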
πŸ”Ž Similar Papers
No similar papers found.
Tao Li
School of Computer Science and Engineering, Macau University of Science and Technology, Taipa, Macau 999078, China
Chenhui Cui
Rubing Huang
Macau University of Science and Technology
AI for Software Engineering · Software Engineering for AI · Software Testing · AI Applications
Dave Towey
University of Nottingham Ningbo China
Software Testing · Metamorphic Testing · Adaptive Random Testing · Technology-enhanced Learning and Instruction · Computer Literacy
Lei Ma
The University of Tokyo, Tokyo 113-8654, Japan, and also with the University of Alberta, Edmonton, AB T6G 2R3, Canada