🤖 AI Summary
Existing benchmarks for Preference-Conditioned Policy Learning (PCPL) in multi-objective reinforcement learning are largely confined to simple, static environments, lacking the complexity and scalability of real-world scenarios. To address this limitation, this work proposes GraphAllocBench, a flexible benchmark built on CityPlannerEnv, a novel sandbox environment for graph-structured urban resource allocation that supports dynamic preferences, diverse objective functions, and high-dimensional scalability. The benchmark introduces two new evaluation metrics, the Proportion of Non-Dominated Solutions (PNDS) and the Ordering Score (OS), which enable more direct assessment of preference alignment. Empirical results show that current multi-objective reinforcement learning (MORL) methods perform poorly in this environment, whereas a graph-aware policy integrating graph neural networks (GNNs) with multilayer perceptrons (MLPs) adapts markedly better, validating the benchmark's difficulty and usefulness.
📝 Abstract
Preference-Conditioned Policy Learning (PCPL) in Multi-Objective Reinforcement Learning (MORL) aims to approximate diverse Pareto-optimal solutions by conditioning policies on user-specified preferences over objectives. This enables a single model to flexibly adapt to arbitrary trade-offs at run-time by producing a policy on or near the Pareto front. However, existing benchmarks for PCPL are largely restricted to toy tasks and fixed environments, limiting their realism and scalability. To address this gap, we introduce GraphAllocBench, a flexible benchmark built on a novel graph-based resource allocation sandbox environment inspired by city management, which we call CityPlannerEnv. GraphAllocBench provides a rich suite of problems with diverse objective functions, varying preference conditions, and high-dimensional scalability. We also propose two new evaluation metrics -- Proportion of Non-Dominated Solutions (PNDS) and Ordering Score (OS) -- that directly capture preference consistency while complementing the widely used hypervolume metric. Through experiments with Multi-Layer Perceptrons (MLPs) and graph-aware models, we show that GraphAllocBench exposes the limitations of existing MORL approaches and paves the way for using graph-based methods such as Graph Neural Networks (GNNs) in complex, high-dimensional combinatorial allocation tasks. Beyond its predefined problem set, GraphAllocBench enables users to flexibly vary objectives, preferences, and allocation rules, establishing it as a versatile and extensible benchmark for advancing PCPL. Code: https://github.com/jzh001/GraphAllocBench
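The abstract names the Proportion of Non-Dominated Solutions (PNDS) but does not give its formula. As a minimal sketch, assuming PNDS is read literally as the fraction of a policy's achieved objective vectors that are Pareto non-dominated within the evaluated set (with all objectives maximized), the computation would look like this; the function names and this exact definition are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b`,
    assuming every objective is to be maximized:
    `a` is at least as good everywhere and strictly better somewhere."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return bool(np.all(a >= b) and np.any(a > b))

def pnds(solutions):
    """Hypothetical PNDS: fraction of candidate objective vectors
    not dominated by any other candidate in the same evaluated set."""
    solutions = np.asarray(solutions, dtype=float)
    n = len(solutions)
    non_dominated = sum(
        1 for i in range(n)
        if not any(dominates(solutions[j], solutions[i])
                   for j in range(n) if j != i)
    )
    return non_dominated / n
```

For example, `pnds([[1, 2], [2, 1], [0, 0]])` returns 2/3, since `[0, 0]` is dominated by `[1, 2]` while the other two vectors are mutually non-dominated trade-offs.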