🤖 AI Summary
This work addresses the challenge of directly optimizing nonlinear welfare criteria, such as Nash social welfare and max-min fairness, in offline multi-objective reinforcement learning (MORL), where conventional methods struggle due to the absence of online interaction. We propose FairDICE, the first unified framework enabling end-to-end optimization of nonlinear fairness objectives in the offline setting. Grounded in distribution correction estimation (DICE), our approach incorporates nonlinear welfare functions into both policy evaluation and optimization, eliminating the need for predefined weight tuning or exhaustive Pareto-front enumeration. Crucially, it jointly optimizes aggregate welfare and distributional robustness under dataset constraints. Empirical evaluation across standard offline MORL benchmarks demonstrates significant improvements in fairness, stability, and sample efficiency. To our knowledge, this is the first method to directly optimize nonlinear social welfare from a fixed, static dataset.
📝 Abstract
Multi-objective reinforcement learning (MORL) aims to optimize policies in the presence of conflicting objectives, where linear scalarization is commonly used to reduce vector-valued returns to scalar signals. While effective for certain preferences, this approach cannot capture fairness-oriented goals such as Nash social welfare or max-min fairness, which require nonlinear and non-additive trade-offs. Although several online algorithms have been proposed for specific fairness objectives, a unified approach for optimizing nonlinear welfare criteria in the offline setting, where learning must proceed from a fixed dataset, remains unexplored. In this work, we present FairDICE, the first offline MORL framework that directly optimizes nonlinear welfare objectives. FairDICE leverages distribution correction estimation to jointly account for welfare maximization and distributional regularization, enabling stable and sample-efficient learning without requiring explicit preference weights or exhaustive weight search. Across multiple offline benchmarks, FairDICE demonstrates strong fairness-aware performance compared to existing baselines.
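To make the limitation of linear scalarization concrete, here is a minimal sketch (not from the paper; the two-objective return vectors are hypothetical) comparing a weighted-sum scalarization against the Nash social welfare and max-min fairness criteria mentioned in the abstract:

```python
import numpy as np

# Hypothetical per-objective returns of two candidate policies.
# Policy A is balanced; policy B sacrifices one objective for the other.
returns_a = np.array([5.0, 5.0])
returns_b = np.array([9.0, 1.0])

def linear(r, w):
    """Linear scalarization: weighted sum of objective returns."""
    return float(np.dot(w, r))

def nash_welfare(r):
    """Nash social welfare in log form: sum of log returns (assumes r > 0)."""
    return float(np.sum(np.log(r)))

def max_min(r):
    """Max-min fairness criterion: value of the worst-off objective."""
    return float(np.min(r))

w = np.array([0.5, 0.5])
# Equal-weight linear scalarization gives both policies the same score (5.0)...
print(linear(returns_a, w), linear(returns_b, w))
# ...while the nonlinear, non-additive criteria prefer the balanced policy A.
print(nash_welfare(returns_a) > nash_welfare(returns_b))  # True
print(max_min(returns_a) > max_min(returns_b))            # True
```

No fixed weight vector reproduces this preference for balance with a weighted sum alone, which is why fairness objectives like these require direct nonlinear optimization.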