🤖 AI Summary
This work addresses the challenges in designing participatory budgeting rules, which are often hindered by the high level of domain expertise required and the inherent trade-off between utility and fairness. To overcome these limitations, the paper introduces LLMRule, a novel framework that combines large language models (LLMs) with evolutionary search to automatically design such rules. The approach models budget allocation as a knapsack problem, uses LLMs to generate candidate rules, and applies an evolutionary algorithm to optimize them for both social welfare and fairness. Evaluated on more than 600 real-world budgeting instances from the United States, Canada, Poland, and the Netherlands, LLMRule consistently yields rules that outperform existing handcrafted rules in overall utility while maintaining a comparable degree of fairness, reducing the reliance on expert-driven rule design.
📝 Abstract
Participatory budgeting (PB) is a democratic paradigm for deciding the funding of public projects given residents' preferences, and it has been adopted in numerous cities across the world. The main focus of PB is designing rules: functions that return feasible budget allocations for a set of projects subject to a budget constraint. Designing PB rules that optimize both utility and fairness objectives based on agent preferences has been challenging due to the extensive domain knowledge required and the proven trade-off between the two notions. Recently, large language models (LLMs) have been increasingly employed for automated algorithmic design. Given the resemblance of PB rules to algorithms for classical knapsack problems, in this paper we introduce a novel framework, named LLMRule, that addresses the limitations of existing works by incorporating LLMs into an evolutionary search procedure for automating the design of PB rules. Our experimental results, evaluated on more than 600 real-world PB instances obtained from the U.S., Canada, Poland, and the Netherlands with different representations of agent preferences, demonstrate that the LLM-generated rules generally outperform existing handcrafted rules in terms of overall utility while still maintaining a similar degree of fairness.
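To make the knapsack framing concrete, here is a minimal sketch of one classical handcrafted PB rule of the kind the LLM-generated rules are compared against: a greedy, knapsack-style rule that funds projects in decreasing order of utility per unit cost until the budget runs out. This is an illustrative assumption on our part, not the paper's actual rule or code; the function and data names are hypothetical.

```python
def greedy_pb_rule(projects, budget):
    """Greedy knapsack-style PB rule.

    projects: list of (name, cost, utility) tuples, where utility could be
              e.g. the number of approval votes a project received.
    budget:   total funds available.
    Returns the list of funded project names.
    """
    funded = []
    remaining = budget
    # Sort by utility density (utility per unit cost), the classic
    # greedy heuristic for the knapsack problem.
    for name, cost, utility in sorted(
        projects, key=lambda p: p[2] / p[1], reverse=True
    ):
        if cost <= remaining:
            funded.append(name)
            remaining -= cost
    return funded


# Hypothetical instance: three projects competing for a budget of 600.
projects = [("park", 400, 120), ("library", 300, 100), ("bikes", 200, 50)]
print(greedy_pb_rule(projects, 600))  # ['library', 'bikes']
```

The library (density 0.33) is funded first; the park (0.30) no longer fits, so the rule falls through to the bike lanes (0.25). Rules like this maximize a utility proxy but can ignore fairness across voter groups, which is exactly the trade-off the evolutionary search in LLMRule is designed to navigate.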