🤖 AI Summary
To address the low efficiency of Pareto front approximation in multi-objective reinforcement learning (MORL), this paper proposes a UCB-based adaptive search method for linear utility function weight vectors. Unlike fixed or random weight sampling, our approach models the weight space as a noisy reward function, using hypervolume gain as feedback to dynamically concentrate exploration on high-potential weight regions. This is the first theoretically grounded application of the UCB strategy to utility function space for weight scheduling. The method is fully compatible with existing multi-objective policy optimization frameworks—e.g., MOPPO—without requiring modifications to underlying algorithms. Evaluated on MuJoCo benchmark tasks, it achieves an average 12.7% improvement in hypervolume across random seeds, significantly enhancing both Pareto front coverage and convergence speed. The implementation is publicly available.
📝 Abstract
In Multi-objective Reinforcement Learning (MORL) agents are tasked with optimising decision-making behaviours that trade-off between multiple, possibly conflicting, objectives. MORL based on decomposition is a family of solution methods that employ a number of utility functions to decompose the multi-objective problem into individual single-objective problems solved simultaneously in order to approximate a Pareto front of policies. We focus on the case of linear utility functions parameterised by weight vectors w. We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process, with the aim of maximising the hypervolume of the resulting Pareto front. The proposed method is shown to outperform various MORL baselines on Mujoco benchmark problems across different random seeds. The code is online at: https://github.com/SYCAMORE-1/ucb-MOPPO.