UnitTenX: Generating Tests for Legacy Packages with AI Agents Powered by Formal Verification

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address low test coverage and difficulty in validating critical paths in legacy code, this paper proposes a synergistic test generation method integrating AI multi-agent systems, formal verification, and large language models (LLMs). The approach employs specialized agents for code comprehension, constraint modeling, test generation, and formal verification, augmented by static analysis and symbolic execution to overcome LLM limitations in logical consistency and boundary-condition reasoning. The resulting end-to-end system automatically generates high-coverage, high-assurance unit tests, significantly improving defect detection while enhancing code readability and documentation. Experimental evaluation across multiple industrial-scale legacy projects demonstrates an average 32.7% improvement in branch coverage and a 91.4% critical-path verification rate. This work delivers a verifiable, scalable, and automated testing solution for safety-critical legacy system refactoring.
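The agent pipeline described above — code comprehension, constraint modeling, test generation, and verification — can be sketched as a chain of stages. This is a minimal illustration only; the function names, the toy integer-boundary constraints, and the trivial verification gate are assumptions for exposition, not UnitTenX's actual architecture or API:

```python
from dataclasses import dataclass

@dataclass
class Analysis:
    """Output of the (toy) code-comprehension stage."""
    functions: list

def comprehend(source: str) -> Analysis:
    """Toy 'code comprehension': collect top-level function names."""
    funcs = [line.split("def ")[1].split("(")[0]
             for line in source.splitlines()
             if line.strip().startswith("def ")]
    return Analysis(functions=funcs)

def model_constraints(analysis: Analysis) -> dict:
    """Toy constraint modeling: assume integer boundary values per function.
    (A real system would derive these via static analysis / symbolic execution.)"""
    return {f: [-1, 0, 1] for f in analysis.functions}

def generate_tests(constraints: dict) -> list:
    """Emit one assertion stub per (function, boundary value) pair."""
    return [f"assert {f}({v}) is not None"
            for f, values in constraints.items() for v in values]

def verify(tests: list) -> bool:
    """Stand-in for the formal-verification gate: here, just non-emptiness."""
    return len(tests) > 0

source = "def clamp(x):\n    return max(0, min(x, 10))\n"
tests = generate_tests(model_constraints(comprehend(source)))
print(len(tests))  # 3 boundary-value test stubs for 'clamp'
```

The point of the sketch is the staged hand-off: each agent consumes the previous agent's output, and only tests that pass the verification gate survive, which is how the paper's combination of LLM generation and formal checking fits together.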

📝 Abstract
This paper introduces UnitTenX, a state-of-the-art open-source AI multi-agent system designed to generate unit tests for legacy code, enhancing test coverage and critical value testing. UnitTenX leverages a combination of AI agents, formal methods, and Large Language Models (LLMs) to automate test generation, addressing the challenges posed by complex and legacy codebases. Despite the limitations of LLMs in bug detection, UnitTenX offers a robust framework for improving software reliability and maintainability. Our results demonstrate the effectiveness of this approach in generating high-quality tests and identifying potential issues. Additionally, our approach enhances the readability and documentation of legacy code.
Problem

Research questions and friction points this paper addresses.

Legacy codebases suffer from low test coverage, and their critical paths are hard to validate
LLMs alone are unreliable at logical consistency and boundary-condition reasoning
Generated tests for safety-critical refactoring need verifiable guarantees, not just plausible assertions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Specialized AI agents for code comprehension, constraint modeling, test generation, and formal verification
Static analysis and symbolic execution compensate for LLM weaknesses in boundary reasoning
End-to-end automation of high-coverage, high-assurance test generation for legacy codebases
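To make the goal concrete, here is the kind of boundary-value unit test such a constraint-driven generator targets. Both the legacy function `saturating_add` and the tests are hypothetical examples, not output from the paper:

```python
import unittest

def saturating_add(a: int, b: int, limit: int = 255) -> int:
    """Hypothetical legacy function under test: add two ints, clamping at `limit`."""
    return min(a + b, limit)

class TestSaturatingAdd(unittest.TestCase):
    """Boundary-value tests of the kind a constraint-aware generator would emit:
    one case below the limit, one exactly at it, one past it."""

    def test_below_limit(self):
        self.assertEqual(saturating_add(100, 100), 200)

    def test_at_limit(self):
        self.assertEqual(saturating_add(200, 55), 255)

    def test_above_limit_saturates(self):
        self.assertEqual(saturating_add(200, 100), 255)

if __name__ == "__main__":
    unittest.main()
```

Exercising the exact saturation boundary (`a + b == limit`) is what distinguishes constraint-derived tests from the "happy path" cases LLMs tend to produce on their own.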