🤖 AI Summary
This work addresses the multi-facility location mechanism design problem, aiming to automatically synthesize mechanisms that are strategyproof, low in social cost, and highly interpretable. To overcome the limitations of manual design (heavy reliance on domain expertise) and of deep learning approaches (opacity and poor generalization), we propose the first framework that integrates large language models (LLMs) into an evolutionary algorithm. Our method employs a formal mechanism description language and a strategyproofness verification module, enabling end-to-end mechanism synthesis without supervised training, hyperparameter tuning, or handcrafted feature engineering. Evaluated under weighted social cost objectives and non-uniform preference distributions, our approach significantly outperforms both hand-designed baselines and deep learning models. Moreover, it demonstrates strong robustness and generalization to out-of-distribution preferences and large-scale instances, establishing a new paradigm for transparent, principled, and scalable mechanism design.
📝 Abstract
Designing strategyproof mechanisms for multi-facility location that optimize social cost based on agent preferences has been challenging due to the extensive domain knowledge required and poor worst-case guarantees. Recently, deep learning models have been proposed as alternatives. However, these models still require some domain knowledge and extensive hyperparameter tuning, and they lack interpretability, which is crucial in practice when transparency of the learned mechanisms is mandatory. In this paper, we introduce a novel approach, named LLMMech, that addresses these limitations by incorporating large language models (LLMs) into an evolutionary framework for generating interpretable, hyperparameter-free, empirically strategyproof, and nearly optimal mechanisms. Our experimental results, evaluated on various problem settings where the social cost is arbitrarily weighted across agents and the agent preferences may not be uniformly distributed, demonstrate that the LLM-generated mechanisms generally outperform existing handcrafted baselines and deep learning models. Furthermore, the mechanisms exhibit impressive generalizability to out-of-distribution agent preferences and to larger instances with more agents.
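To ground the terminology, here is a minimal illustrative sketch (not taken from the paper) of a classic handcrafted strategyproof mechanism: the median rule for placing a single facility on a line, assuming single-peaked agent preferences. The function names `median_mechanism` and `social_cost` are hypothetical, chosen for illustration only.

```python
# Illustrative sketch: the median rule, a handcrafted strategyproof
# mechanism for SINGLE-facility location on a line. With single-peaked
# preferences, no agent can pull the median toward their own peak by
# misreporting, so truthful reporting is a dominant strategy.

def median_mechanism(peaks):
    """Place one facility at the (lower) median of the reported peaks."""
    s = sorted(peaks)
    return s[(len(s) - 1) // 2]

def social_cost(facility, peaks, weights=None):
    """(Weighted) sum of distances from each agent's peak to the facility."""
    weights = weights or [1.0] * len(peaks)
    return sum(w * abs(p - facility) for w, p in zip(weights, peaks))

truthful = [0.1, 0.4, 0.9]
loc = median_mechanism(truthful)              # facility placed at 0.4
# The agent at 0.9 exaggerates to 1.0, hoping to drag the facility right:
loc_lie = median_mechanism([0.1, 0.4, 1.0])   # still 0.4, so lying gains nothing
```

The multi-facility, weighted-cost setting studied in the paper has no comparably simple optimal rule, which is what motivates searching for mechanisms automatically.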