🤖 AI Summary
To address the high inference cost of large language models (LLMs) and the performance gap of smaller models on software development tasks, this paper proposes Patched MOA, a plug-and-play inference optimization technique that requires no fine-tuning and no larger base model. The approach evaluates three inference optimization algorithms - Best of N sampling, Mixture of Agents (MOA) aggregation, and Monte Carlo Tree Search - within a model-agnostic framework that is transparent to end-users. On the Arena-Hard-Auto benchmark, Patched MOA improves gpt-4o-mini's performance by 15.52%, enabling it to outperform gpt-4-turbo at a fraction of the cost. It also raises task completion rates across several realistic software development workflows. The implementation is publicly available.
📝 Abstract
This paper introduces Patched MOA (Mixture of Agents), an inference optimization technique that significantly enhances the performance of large language models (LLMs) across diverse software development tasks. We evaluate three inference optimization algorithms - Best of N, Mixture of Agents, and Monte Carlo Tree Search - and demonstrate that Patched MOA can boost the performance of smaller models to surpass that of larger, more expensive models. Notably, our approach improves the gpt-4o-mini model's performance on the Arena-Hard-Auto benchmark by 15.52%, outperforming gpt-4-turbo at a fraction of the cost. We also apply Patched MOA to various software development workflows, showing consistent improvements in task completion rates. Our method is model-agnostic, transparent to end-users, and can be easily integrated into existing LLM pipelines. This work contributes to the growing field of LLM optimization, offering a cost-effective solution for enhancing model performance without the need for fine-tuning or larger models. Our implementation is open-source and available at https://github.com/codelion/optillm.
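Two of the strategies named above can be sketched in a few lines. This is an illustrative sketch only, not the optillm implementation: `call_model` stands in for a real LLM API call, and the length-based rating in `best_of_n` is a placeholder for an actual judge or reward model.

```python
def call_model(model: str, prompt: str) -> str:
    # Placeholder for an LLM API call; returns a canned answer per model.
    return f"[{model}] answer to: {prompt}"

def best_of_n(prompt: str, n: int = 3) -> str:
    # Best of N: sample n candidate completions, keep the one the rating
    # function prefers (here, a stand-in rating: longest answer wins).
    candidates = [call_model(f"sampler-{i}", prompt) for i in range(n)]
    return max(candidates, key=len)

def mixture_of_agents(prompt: str, proposers: list[str], aggregator: str) -> str:
    # MOA: several proposer models each answer the prompt, then an
    # aggregator model synthesizes a final answer from all proposals.
    proposals = [call_model(m, prompt) for m in proposers]
    agg_prompt = prompt + "\nProposals:\n" + "\n".join(proposals)
    return call_model(aggregator, agg_prompt)
```

In a real pipeline the proposers and aggregator would be actual model endpoints, and the Best of N rating function would itself typically be an LLM judge; Monte Carlo Tree Search additionally explores multi-step response trajectories rather than single completions.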