🤖 AI Summary
This work addresses the weak correctness guarantees of existing program synthesis methods by proposing QualityFlow, a collaborative synthesis framework that integrates a dynamic multi-agent workflow with an LLM-based quality checker. Methodologically, it establishes a closed-loop collaboration among code-generation, test-execution, and self-debugging agents, and introduces an LLM quality checker that explicitly models program execution to assess test compliance at each step, enabling dynamic submission of the final answer, clarification of the problem statement, and step-level backtracking; this is augmented by diversified prompting and quality-feedback-driven adaptive decision-making. The key contribution is the first integration of dynamic, execution-aware quality verification into the synthesis pipeline, enabling fine-grained procedural control. Empirically, the approach achieves state-of-the-art performance on MBPP, HumanEval, and their stricter EvalPlus variants, significantly outperforming static workflows and single-attempt zero-shot synthesis baselines.
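To make the control flow concrete, the sketch below shows one way such a quality-check-driven loop could be wired up. It is a minimal illustration under assumptions, not the paper's implementation: every agent (`generate`, `run_tests`, `check`, `clarify`, `debug`) is a hypothetical callable standing in for an LLM agent or test executor.

```python
# Minimal sketch of a quality-check-controlled synthesis loop.
# NOT the authors' implementation: all agent callables below are
# hypothetical placeholders for LLM agents and a test executor.
from enum import Enum
from typing import Callable


class Verdict(Enum):
    SUBMIT = "submit"    # checker believes the program is correct
    CLARIFY = "clarify"  # problem statement seems ambiguous
    REVERT = "revert"    # workflow has deviated; backtrack a step
    DEBUG = "debug"      # failure looks fixable; keep self-debugging


def synthesize(
    problem: str,
    unit_tests: list[str],
    generate: Callable[[str], str],              # code-generation agent
    run_tests: Callable[[str, list[str]], str],  # returns an execution report
    check: Callable[[str, str, str], Verdict],   # LLM quality checker
    clarify: Callable[[str, str], str],          # problem-clarification agent
    debug: Callable[[str, str], str],            # self-debugging agent
    max_steps: int = 10,
) -> str | None:
    history: list[str] = []        # snapshots for step-level backtracking
    program = generate(problem)
    for _ in range(max_steps):
        report = run_tests(program, unit_tests)
        verdict = check(problem, program, report)
        if verdict is Verdict.SUBMIT:
            return program                       # accept and submit
        if verdict is Verdict.CLARIFY:
            problem = clarify(problem, report)   # refine the statement
            program = generate(problem)          # re-synthesize from scratch
        elif verdict is Verdict.REVERT and history:
            program = history.pop()              # revert the previous step
        else:
            history.append(program)              # remember before editing
            program = debug(program, report)     # self-debugging step
    return None  # no candidate passed the quality check within budget
```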
📝 Abstract
We introduce QualityFlow, a dynamic agentic workflow for program synthesis. Given the English description of a programming problem and a set of unit tests, the model's goal is to synthesize a correct program that solves the problem and passes the tests. QualityFlow consists of multiple large language model (LLM) agents that resemble a software development team, including code generation, testing, and self-debugging. Existing program synthesis methods face three major limitations: the assumption of conformity to visible unit tests, a bottleneck in synthesized test quality, and deviation of the self-debugging trajectory. To address them, we propose the LLM Quality Checker, which explicitly "imagines" whether the synthesized programs' execution would conform to the unit tests. The quality checks dynamically control the workflow, including actions to submit the final answer, clarify the problem statement, and revert previous workflow steps. As a result, our Quality Checker can precisely accept any correct program, mitigate faulty synthesized tests, and prevent potential workflow deviation. The success of the Quality Checker further enables Diversified Prompting, which encourages variation in LLM responses to maximize the likelihood that a correct program appears and passes the quality check. In experiments, QualityFlow establishes state-of-the-art results on four program synthesis benchmarks: MBPP, HumanEval, and the stricter evaluations of both MBPP and HumanEval from EvalPlus. Our systematic analysis shows that the dynamic workflow controlled by LLM quality checks can outperform static workflows and single-attempt zero-shot synthesis. The Quality Checker is the center of our investigation; we dissect both its standalone performance and its integrated impact on workflow accuracy, and report further ablation experiments that justify our workflow design.
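As an illustration of Diversified Prompting, the hedged sketch below varies the prompt framing and sampling temperature across attempts and returns the first candidate that the quality check accepts. `call_llm` and `passes_quality_check` are hypothetical stand-ins for a model API and the LLM Quality Checker, and the specific styles and temperatures are assumptions for illustration, not values from the paper.

```python
# Hedged sketch of Diversified Prompting: vary prompt style and sampling
# temperature so that at least one candidate is likely to pass the quality
# check. `call_llm` and `passes_quality_check` are hypothetical stand-ins.
import itertools

# Illustrative prompt variations (assumed, not taken from the paper).
STYLES = [
    "Write a straightforward solution.",
    "Write a solution using a different algorithm than the obvious one.",
    "Write a heavily commented, step-by-step solution.",
]
TEMPERATURES = [0.2, 0.7, 1.0]


def diversified_synthesis(problem, unit_tests, call_llm, passes_quality_check):
    # Try every (style, temperature) pairing; accept the first candidate
    # that the quality checker approves, mirroring the idea that diverse
    # responses maximize the chance a correct program appears.
    for style, temp in itertools.product(STYLES, TEMPERATURES):
        prompt = f"{style}\n\nProblem:\n{problem}"
        candidate = call_llm(prompt, temperature=temp)
        if passes_quality_check(problem, candidate, unit_tests):
            return candidate
    return None  # no variation passed the quality check
```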