AI Summary
This work addresses the challenge of integrating multi-source human preference feedback into offline reinforcement learning, maximizing overall utility subject to minimum welfare constraints for protected groups. The authors propose a constrained preference learning framework based on pairwise comparison data, which estimates a reward function for each preference source via maximum likelihood. By reformulating the constrained optimization problem as a KL-regularized Lagrangian, the method takes a Gibbs policy as the primal solution and optimizes only the dual variables, keeping computation efficient. The approach provides the first finite-sample performance guarantees for offline constrained preference learning, accommodates multiple constraints and general f-divergence regularizers, and satisfies the constraints with high probability. Theoretical analysis establishes the algorithm's convergence and safety, and experiments demonstrate an effective trade-off between fairness and performance.
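As an illustrative sketch of the dual reformulation described above (the symbols $r_0$, $r_1$, $b$, $\beta$, and $\pi_{\mathrm{ref}}$ are assumed notation, not taken from the paper): with a multiplier $\lambda \ge 0$ on the protected-group welfare constraint, the KL-regularized Lagrangian admits a Gibbs policy as its primal maximizer, which is why only the dual variable needs to be optimized.

```latex
% Illustrative only: r_0 (target reward), r_1 (protected-group reward),
% b (welfare threshold), beta (KL weight), pi_ref (reference policy)
% are assumed notation rather than the paper's.
\max_{\pi}\;
  \mathbb{E}_{\pi}\!\bigl[r_0(s,a) + \lambda\,(r_1(s,a) - b)\bigr]
  \;-\; \beta\,\mathrm{KL}\!\bigl(\pi \,\|\, \pi_{\mathrm{ref}}\bigr),
\qquad
\pi_{\lambda}(a \mid s) \;\propto\;
  \pi_{\mathrm{ref}}(a \mid s)\,
  \exp\!\Bigl(\tfrac{1}{\beta}\bigl(r_0(s,a) + \lambda\, r_1(s,a)\bigr)\Bigr).
```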
Abstract
We study offline constrained reinforcement learning from human feedback with multiple preference oracles. Motivated by applications that trade off performance with safety or fairness, we aim to maximize target population utility subject to a minimum protected group welfare constraint. From pairwise comparisons collected under a reference policy, we estimate oracle-specific rewards via maximum likelihood and analyze how statistical uncertainty propagates through the dual program. We cast the constrained objective as a KL-regularized Lagrangian whose primal optimizer is a Gibbs policy, reducing learning to a convex dual problem. We propose a dual-only algorithm that ensures high-probability constraint satisfaction and provide the first finite-sample performance guarantees for offline constrained preference learning. Finally, we extend our theoretical analysis to accommodate multiple constraints and general f-divergence regularization.
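The following is a minimal, self-contained sketch of the pipeline the abstract describes, under assumptions not stated in the paper: a finite action set with linear reward features, a uniform reference policy, Bradley-Terry maximum-likelihood reward estimation, and a single welfare constraint solved through the convex dual. Names such as `bt_mle`, `gibbs_policy`, and the threshold `b` are hypothetical.

```python
# Illustrative sketch only; the features, threshold, and discrete action
# space are assumptions, not the paper's setup.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit, logsumexp

rng = np.random.default_rng(0)
n_actions, dim, beta, b = 5, 3, 1.0, 0.2
phi = rng.normal(size=(n_actions, dim))        # action features
pi_ref = np.full(n_actions, 1.0 / n_actions)   # uniform reference policy

def bt_mle(comparisons, n_iter=200, lr=0.5):
    """Fit a linear reward r(a) = phi[a] @ w from (winner, loser) pairs
    under the Bradley-Terry model by gradient ascent on the log-likelihood."""
    w = np.zeros(dim)
    for _ in range(n_iter):
        grad = np.zeros(dim)
        for win, lose in comparisons:
            d = phi[win] - phi[lose]
            grad += (1.0 - expit(d @ w)) * d   # d/dw log sigmoid(d @ w)
        w += lr * grad / len(comparisons)
    return phi @ w                             # estimated reward per action

def sample_comparisons(true_r, n=500):
    """Synthetic pairwise comparisons from one preference oracle."""
    pairs = rng.integers(n_actions, size=(n, 2))
    first_wins = rng.random(n) < expit(true_r[pairs[:, 0]] - true_r[pairs[:, 1]])
    return [(i, j) if w else (j, i) for (i, j), w in zip(pairs, first_wins)]

# Two oracles: target-population reward r0 and protected-group reward r1.
true_r0, true_r1 = phi @ rng.normal(size=dim), phi @ rng.normal(size=dim)
r0, r1 = bt_mle(sample_comparisons(true_r0)), bt_mle(sample_comparisons(true_r1))

def gibbs_policy(lam):
    """Primal maximizer of the KL-regularized Lagrangian for a given dual lam."""
    logits = np.log(pi_ref) + (r0 + lam * r1) / beta
    return np.exp(logits - logsumexp(logits))

def dual(lam):
    """Convex dual: beta * log-partition of the Gibbs policy minus lam * b."""
    return beta * logsumexp(np.log(pi_ref) + (r0 + lam * r1) / beta) - lam * b

lam_star = minimize_scalar(dual, bounds=(0.0, 50.0), method="bounded").x
pi_star = gibbs_policy(lam_star)
print("dual variable:", lam_star, "protected-group welfare:", pi_star @ r1)
```

The dual function here is a log-sum-exp of affine functions of the multiplier minus a linear term, hence convex, which is what makes a dual-only procedure tractable; the full paper additionally handles pessimistic reward estimates, multiple constraints, and general f-divergence regularizers, none of which this toy sketch includes.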