🤖 AI Summary
This work addresses the problem of redundant and inefficient code generation by large language models (LLMs). To tackle this, we propose an end-to-end code optimization framework driven by test-case minimization. Our core method formulates test-suite minimization as a Quadratic Unconstrained Binary Optimization (QUBO) problem, which enables seamless integration with both classical and quantum solvers, and combines LLM-generated test cases with hybrid quantum annealing (QA) and simulated annealing (SA) to solve the resulting combinatorial optimization task. To our knowledge, this is the first integration of generative AI with QUBO-based optimization for code refinement. Experimental results show that QA achieves a 16× speedup over SA in solving the QUBO formulation. Overall, our framework reduces token consumption by 36.5% while improving both the conciseness and functional correctness of the generated code.
📝 Abstract
Precisely controlling Large Language Models (LLMs) to generate efficient and concise code is a central challenge in software engineering. We introduce a framework based on Test-Driven Development (TDD) that recasts code specification as a combinatorial optimization task. The framework first prompts an LLM to generate a test suite, then formulates the Test Case Minimization (TCM) problem as a Quadratic Unconstrained Binary Optimization (QUBO) model. The QUBO paradigm is compatible with both classical solvers and emerging hardware such as quantum annealers. Experimentally, quantum annealing solves the core TCM task 16 times faster than simulated annealing. This performance underpins our end-to-end framework, which reduces total token consumption by 36.5% and significantly improves code quality. This work demonstrates a powerful synergy between generative AI and combinatorial optimization in software engineering, and highlights the critical importance of precise model formulation.
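To make the TCM-as-QUBO idea concrete, the sketch below encodes a toy instance as a QUBO and solves it by brute force (a stand-in for the SA/QA solvers used in the paper). The coverage data, costs, and penalty weight are hypothetical illustrations, not values from the paper, and the penalty shown is the common exact-cover-style formulation: each selected test contributes its cost on the diagonal, and each coverage requirement contributes a squared penalty whose expansion yields linear and quadratic QUBO terms.

```python
from itertools import product

# Toy Test Case Minimization instance (hypothetical data, for illustration only):
# 4 candidate tests, 3 coverage requirements; covers[i] = requirements hit by test i.
covers = [{0, 1}, {1, 2}, {0}, {2}]
cost = [1, 1, 1, 1]          # cost of keeping each test (e.g., tokens or runtime)
n_req, P = 3, 5              # penalty weight P > max cost keeps constraints binding

# Build the QUBO for the exact-cover-style objective:
#   minimize  sum_i cost[i]*x_i + P * sum_r (1 - sum_{i: r in covers[i]} x_i)^2
Q = {}
for i, c in enumerate(cost):
    Q[(i, i)] = Q.get((i, i), 0) + c
for r in range(n_req):
    hitters = [i for i, s in enumerate(covers) if r in s]
    for a, i in enumerate(hitters):
        Q[(i, i)] -= P                               # linear term from expanding the square
        for j in hitters[a + 1:]:
            Q[(i, j)] = Q.get((i, j), 0) + 2 * P     # quadratic over-coverage penalty
offset = P * n_req                                   # constant term from the expansion

def energy(x):
    """QUBO energy of a binary assignment x (tuple of 0/1)."""
    return offset + sum(v * x[i] * x[j] for (i, j), v in Q.items())

# Exhaustively scan the 2^n assignments; an annealer would sample this landscape instead.
best = min(product((0, 1), repeat=len(cost)), key=energy)
kept = [i for i, b in enumerate(best) if b]
print(kept, energy(best))    # a minimal test suite covering every requirement
```

Because the objective is a pure QUBO (binary variables, at most quadratic terms, no explicit constraints), the same `Q` dictionary can be handed unchanged to a classical simulated annealer or submitted to a quantum annealer, which is the interoperability the abstract emphasizes.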