🤖 AI Summary
Current monolithic large language models (LLMs) struggle to simultaneously ensure correctness and runtime efficiency in cross-lingual, multi-stage code generation (generation, repair, and optimization). This paper proposes a multi-stage, performance-guided LLM collaboration framework. The authors empirically characterize the heterogeneous performance of 17 mainstream LLMs across five programming languages (including Python and Java) and three code-generation stages, revealing significant cross-model disparities for the first time. Their method introduces a plug-and-play, fine-tuning-free collaboration mechanism that adaptively routes tasks via stage-wise validation and controlled rollback. Evaluated on HumanEval-X and EffiBench-X, the framework achieves correctness rates of 96.22% and 91.37%, respectively, surpassing GPT-4o, and reduces execution time for 58.76% of tasks with median speedups of 17.67%–27.66%. The core contribution is the first runtime-performance-oriented multi-model collaboration paradigm for code generation.
📝 Abstract
While Large Language Models (LLMs) have become the predominant paradigm for automated code generation, current single-model approaches fundamentally ignore the heterogeneous computational strengths that different models exhibit across programming languages, algorithmic domains, and development stages. This paper challenges the single-model convention by introducing a multi-stage, performance-guided orchestration framework that dynamically routes coding tasks to the most suitable LLMs within a structured generate-fix-refine workflow. Our approach is grounded in a comprehensive empirical study of 17 state-of-the-art LLMs across five programming languages (Python, Java, C++, Go, and Rust) using the HumanEval-X benchmark. The study, which evaluates both functional correctness and runtime performance metrics (execution time, mean/max memory utilization, and CPU efficiency), reveals pronounced performance heterogeneity by language, development stage, and problem category. Guided by these empirical insights, we present PerfOrch, an LLM agent that orchestrates the top-performing LLMs for each task context through stage-wise validation and rollback mechanisms. Without requiring model fine-tuning, PerfOrch achieves substantial improvements over strong single-model baselines: average correctness rates of 96.22% and 91.37% on HumanEval-X and EffiBench-X respectively, surpassing GPT-4o's 78.66% and 49.11%. Beyond correctness gains, the framework delivers consistent performance optimizations, reducing execution time for 58.76% of problems with median speedups ranging from 17.67% to 27.66% across languages on the two benchmarks. The framework's plug-and-play architecture ensures practical scalability, allowing new LLMs to be profiled and integrated seamlessly, thereby offering a paradigm for production-grade automated software engineering that adapts to the rapidly evolving generative AI landscape.
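The abstract does not spell out PerfOrch's interfaces, but the control flow it describes (per-stage model routing, stage-wise validation, and controlled rollback) can be sketched compactly. Below is a minimal Python illustration under stated assumptions: the `invoke` and `validate` callables, the stage names, and the `STAGE_RANKINGS` table are hypothetical stand-ins, not the paper's actual API or profiling data.

```python
from typing import Callable, Optional

# Hypothetical per-stage model rankings, as if precomputed from profiling.
# Model names and orderings are illustrative only.
STAGE_RANKINGS = {
    "generate": ["model-a", "model-b", "model-c"],
    "fix":      ["model-b", "model-a", "model-c"],
    "refine":   ["model-c", "model-b", "model-a"],
}

def orchestrate(task: str,
                language: str,
                invoke: Callable[[str, str, str, Optional[str]], str],
                validate: Callable[[str, str], bool]) -> Optional[str]:
    """Route each stage to its top-ranked model; accept a candidate only
    if it passes validation, otherwise fall back to lower-ranked models."""
    accepted: Optional[str] = None
    for stage in ("generate", "fix", "refine"):
        for model in STAGE_RANKINGS[stage]:           # best-ranked model first
            candidate = invoke(model, stage, task, accepted)
            if validate(candidate, language):         # e.g., run the test suite
                accepted = candidate                  # commit this stage's output
                break
        # If no model's output validates at this stage, `accepted` is left
        # unchanged: a controlled rollback to the last validated version.
    return accepted
```

In this sketch the rollback is implicit: a failed stage never overwrites the last validated artifact, so later stages (or the final result) always build on a known-good version. A fuller implementation in the paper's spirit would presumably also compare runtime metrics during the refine stage and roll back optimizations that regress performance.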