Industrial LLM-based Code Optimization under Regulation: A Mixture-of-Agents Approach

📅 2025-08-05
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Regulatory constraints on data privacy hinder the adoption of commercial large language models (LLMs) for code optimization in highly regulated industries. Method: This paper proposes a Mixture-of-Agents (MoA) architecture built exclusively on lightweight, open-source LLMs, marking the first application of MoA to end-to-end, industrial-grade code performance engineering automation under strict compliance requirements. Contribution/Results: Evaluated on 50 real-world industrial code snippets across seven model configurations, the system generated over 8,700 optimized variants. Compared to genetic-algorithm and single-LLM baselines, it achieves average reductions of 14.3%–22.2% in computational cost and 28.6%–32.2% faster optimization times. The paper further provides a reproducible, regulatory-compliant deployment guide, empirically validating MoA's advantages and engineering feasibility when instantiated with open-source models.

๐Ÿ“ Abstract
Recent advancements in Large Language Models (LLMs) for code optimization have enabled industrial platforms to automate software performance engineering at unprecedented scale and speed. Yet, organizations in regulated industries face strict constraints on which LLMs they can use: many cannot utilize commercial models due to data privacy regulations and compliance requirements, creating a significant challenge for achieving high-quality code optimization while maintaining cost-effectiveness. We address this by implementing a Mixture-of-Agents (MoA) approach that directly synthesizes code from multiple specialized LLMs, comparing it against TurinTech AI's vanilla Genetic Algorithm (GA)-based ensemble system and individual LLM optimizers using real-world industrial codebases. Our key contributions include: (1) First MoA application to industrial code optimization using real-world codebases; (2) Empirical evidence that MoA excels with open-source models, achieving 14.3% to 22.2% cost savings and 28.6% to 32.2% faster optimization times for regulated environments; (3) Deployment guidelines demonstrating GA's advantage with commercial models, while both ensembles outperform individual LLMs; and (4) Real-world validation across 50 code snippets and seven LLM combinations, generating over 8,700 variants, addressing gaps in industrial LLM ensemble evaluation. This provides actionable guidance for organizations balancing regulatory compliance with optimization performance in production environments.
Problem

Research questions and friction points this paper is trying to address.

Optimizing code under strict regulatory constraints
Balancing cost-effectiveness with high-quality code optimization
Comparing Mixture-of-Agents with Genetic Algorithm and individual LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture-of-Agents synthesizes code from multiple LLMs
Open-source MoA achieves 14.3%–22.2% cost savings and 28.6%–32.2% faster optimization
Validated on 50 real-world snippets and seven LLM combinations, yielding over 8,700 variants
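The abstract describes MoA as directly synthesizing code from multiple specialized LLMs rather than evolving variants with a genetic algorithm. A minimal, hypothetical sketch of such a proposer-and-aggregator loop, with stub functions standing in for the LLM agents (all names here are illustrative assumptions, not the paper's actual implementation):

```python
import timeit

# Hypothetical MoA-style optimization loop: several "proposer" agents each
# emit a candidate rewrite of a code snippet, and an aggregator keeps the
# fastest candidate that still passes a correctness check. In the paper the
# proposers are lightweight open-source LLMs; here they are stubs so the
# sketch stays self-contained.

ORIGINAL = (
    "def total(xs):\n"
    "    acc = 0\n"
    "    for x in xs:\n"
    "        acc = acc + x\n"
    "    return acc\n"
)

def agent_a(source: str) -> str:
    # Stub for an agent that replaces the explicit loop with a builtin.
    return "def total(xs):\n    return sum(xs)\n"

def agent_b(source: str) -> str:
    # Stub for an agent that only tweaks the loop body.
    return (
        "def total(xs):\n"
        "    acc = 0\n"
        "    for x in xs:\n"
        "        acc += x\n"
        "    return acc\n"
    )

def _load(variant: str):
    # Execute the candidate source and return its `total` function.
    ns = {}
    exec(variant, ns)
    return ns["total"]

def is_correct(variant: str) -> bool:
    # Cheap behavioral check against the original's expected output.
    return _load(variant)([1, 2, 3]) == 6

def run_time(variant: str) -> float:
    fn = _load(variant)
    data = list(range(1000))
    return timeit.timeit(lambda: fn(data), number=200)

def moa_optimize(source: str, agents) -> str:
    # Aggregation step: pool the original plus every agent's proposal,
    # filter out incorrect rewrites, and select the fastest survivor.
    candidates = [source] + [agent(source) for agent in agents]
    valid = [c for c in candidates if is_correct(c)]
    return min(valid, key=run_time)

best = moa_optimize(ORIGINAL, [agent_a, agent_b])
```

Keeping the original snippet in the candidate pool means the aggregator can never regress performance: if every agent's proposal is wrong or slower, the input wins.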