Large Language Models for Multi-Facility Location Mechanism Design

📅 2025-03-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the multi-facility location mechanism design problem, aiming to synthesize strategyproof, low-social-cost, and highly interpretable mechanisms automatically. To overcome limitations of manual design—such as heavy reliance on domain expertise—and of deep learning approaches—including opacity and poor generalization—we propose the first framework that integrates large language models (LLMs) into an evolutionary algorithm. Our method employs a formal mechanism description language and a strategyproofness verification module, enabling end-to-end mechanism synthesis without supervised training, hyperparameter tuning, or handcrafted feature engineering. Evaluated under weighted social cost objectives and non-uniform preference distributions, our approach significantly outperforms both hand-designed baselines and deep learning models. Moreover, it demonstrates strong robustness and generalization to out-of-distribution preferences and large-scale instances, establishing a new paradigm for transparent, principled, and scalable mechanism design.

Technology Category

Application Category

📝 Abstract
Designing strategyproof mechanisms for multi-facility location that optimize social costs based on agent preferences had been challenging due to the extensive domain knowledge required and poor worst-case guarantees. Recently, deep learning models have been proposed as alternatives. However, these models require some domain knowledge and extensive hyperparameter tuning as well as lacking interpretability, which is crucial in practice when transparency of the learned mechanisms is mandatory. In this paper, we introduce a novel approach, named LLMMech, that addresses these limitations by incorporating large language models (LLMs) into an evolutionary framework for generating interpretable, hyperparameter-free, empirically strategyproof, and nearly optimal mechanisms. Our experimental results, evaluated on various problem settings where the social cost is arbitrarily weighted across agents and the agent preferences may not be uniformly distributed, demonstrate that the LLM-generated mechanisms generally outperform existing handcrafted baselines and deep learning models. Furthermore, the mechanisms exhibit impressive generalizability to out-of-distribution agent preferences and to larger instances with more agents.
Problem

Research questions and friction points this paper is trying to address.

Design strategyproof mechanisms for multi-facility location optimization.
Overcome limitations of deep learning models in interpretability and tuning.
Generate interpretable, hyperparameter-free mechanisms using large language models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses large language models for mechanism design
Evolutionary framework enhances interpretability and strategyproofness
Outperforms traditional and deep learning models
🔎 Similar Papers
No similar papers found.