Benchmarking MOEAs for solving continuous multi-objective RL problems

📅 2025-05-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study systematically evaluates the effectiveness and limitations of multi-objective evolutionary algorithms (MOEAs) in solving continuous multi-objective reinforcement learning (MORL) problems. Addressing MORL-specific challenges—including objective conflict and continuous state spaces—we propose the first complexity characterization framework for MORL instances, construct a standardized benchmark task suite, and pioneer the systematic use of MORL as a benchmark platform for MOEAs. We empirically compare canonical MOEAs (e.g., NSGA-II, MOEA/D) against scalarized single-objective evolutionary methods, validating the applicability and statistical robustness of quality indicators—particularly hypervolume—in MORL. Results demonstrate that MOEAs significantly outperform scalarization-based approaches in both convergence to the Pareto front and diversity of solutions along it; moreover, objective conflict intensity is identified as a key determinant of MOEA performance. We open-source the benchmark framework and analysis report, establishing a new paradigm for interdisciplinary research at the intersection of MORL and MOEAs.

📝 Abstract
Multi-objective reinforcement learning (MORL) addresses the challenge of simultaneously optimizing multiple, often conflicting, rewards, moving beyond the single-reward focus of conventional reinforcement learning (RL). This approach is essential for applications where agents must balance trade-offs between diverse goals, such as speed, energy efficiency, or stability, across a series of sequential decisions. This paper investigates the applicability and limitations of multi-objective evolutionary algorithms (MOEAs) in solving complex MORL problems. We assess whether these algorithms can effectively address the unique challenges posed by MORL and how MORL instances can serve as benchmarks to evaluate and improve MOEA performance. In particular, we propose a framework to characterize the features influencing MORL instance complexity, select representative MORL problems from the literature, and benchmark a suite of MOEAs alongside single-objective EAs using scalarized MORL formulations. Additionally, we evaluate the utility of existing multi-objective quality indicators in MORL scenarios, such as the hypervolume, and conduct a comparison of the algorithms supported by statistical analysis. Our findings provide insights into the interplay between MORL problem characteristics and algorithmic effectiveness, highlighting opportunities for advancing both MORL research and the design of evolutionary algorithms.
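To make the quality indicator discussed in the abstract concrete, here is a minimal sketch of the hypervolume for a two-dimensional minimization front: the volume (here, area) of objective space dominated by the front and bounded by a reference point. The function and the example front are illustrative assumptions, not taken from the paper's benchmark.

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D non-dominated front (both objectives minimized).

    `front` is a list of (f1, f2) points, each dominating `ref`.
    Sorting by f1 ascending makes f2 descend along the front, so the
    dominated region decomposes into horizontal slabs.
    """
    pts = sorted(front)
    hv = 0.0
    prev_f2 = ref[1]
    for f1, f2 in pts:
        # Slab between the previous point's f2 level and this one's,
        # extending from f1 out to the reference point.
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Illustrative 3-point front with reference point (5, 5):
front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))  # 11.0
```

A larger hypervolume indicates both better convergence toward the Pareto front and better spread along it, which is why the paper relies on it as a primary indicator; for more than two objectives, exact computation requires more elaborate algorithms.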
Problem

Research questions and friction points this paper is trying to address.

Evaluating MOEAs for multi-objective RL challenges
Assessing MOEA effectiveness in complex MORL scenarios
Benchmarking MOEAs with multi-objective quality indicators
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes framework to characterize MORL complexity
Benchmarks MOEAs with scalarized MORL formulations
Evaluates multi-objective quality indicators statistically
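The scalarized formulations used as the single-objective baseline can be sketched as a weighted sum that collapses a reward vector into one scalar, so that a standard single-objective EA can optimize it. The weight vector and reward values below are illustrative assumptions, not figures from the paper.

```python
def scalarize(rewards, weights):
    """Weighted-sum scalarization of a multi-objective reward vector."""
    assert len(rewards) == len(weights)
    return sum(w * r for w, r in zip(weights, rewards))

# One fixed weight vector yields one trade-off; sweeping weights
# approximates (the convex part of) the Pareto front.
episode_returns = (12.0, -3.5)  # hypothetical (speed, negated energy cost)
score = scalarize(episode_returns, weights=(0.7, 0.3))
print(score)
```

The paper's comparison hinges on the known limitation of this baseline: a fixed weight vector produces a single solution per run, and weighted sums cannot reach non-convex regions of the Pareto front, whereas MOEAs approximate the whole front in one run.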
Carlos Hernández
Applied Mathematics and Systems Research Institute, National Autonomous University of Mexico, Mexico City, Mexico
Roberto Santana
Intelligent Systems Group ISG, University of the Basque Country UPV/EHU
Estimation of Distribution Algorithms · Evolutionary Computation · Probabilistic Graphical Models · Machine Learning