Improving Pareto Set Learning for Expensive Multi-objective Optimization via Stein Variational Hypernetworks

📅 2024-12-23
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
For computationally expensive multi-objective optimization (MOO) problems, existing surrogate-assisted methods often suffer from fragmented surrogate models, which produce pseudo-local optima and make learning the global Pareto set inefficient. This paper proposes SVH-PSL, a kernelized particle-based framework that integrates Stein Variational Gradient Descent (SVGD) with hypernetworks for Pareto set learning. Inter-particle kernel interactions enable smooth, collaborative exploration of the solution space, while a hypernetwork learns a unified mapping from trade-off reference vectors to Pareto-optimal solutions, improving both the convergence and diversity of the learned Pareto set. Empirical evaluation on synthetic and real-world MOO benchmarks demonstrates superior performance over state-of-the-art surrogate-based optimizers: the method avoids pseudo-local optima, strengthens global exploration, and yields higher-accuracy Pareto solution sets.
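The kernel-driven particle interaction described above follows the general SVGD update rule, in which each particle moves along a kernel-weighted average of all particles' gradients plus a repulsive term that discourages clustering. Below is a minimal NumPy sketch of a generic SVGD step with an RBF kernel and a median-heuristic bandwidth; it illustrates the underlying update only and is not the paper's implementation, and the toy Gaussian target in the usage snippet is purely illustrative.

```python
import numpy as np

def svgd_step(X, grad_logp, step=1e-2, h=None):
    """One generic SVGD update over particles X of shape (n, d).

    grad_logp: callable returning the (n, d) gradient of the log target
    at each particle. The kernel-weighted term pulls particles toward
    high-density regions; the repulsive term keeps them spread out,
    which discourages collapse onto a single (possibly spurious) optimum.
    """
    n = X.shape[0]
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # (n, n) pairwise squared distances
    if h is None:
        # Median heuristic for the RBF bandwidth
        h = np.median(sq_dists) / np.log(n + 1) + 1e-8
    K = np.exp(-sq_dists / h)                                         # RBF kernel matrix
    # Repulsive term: sum_j grad_{x_j} k(x_j, x_i)
    repulse = (2.0 / h) * (K.sum(axis=1, keepdims=True) * X - K @ X)
    phi = (K @ grad_logp(X) + repulse) / n
    return X + step * phi

# Toy usage: particles drift toward a standard Gaussian while staying diverse.
X = 3.0 * np.random.randn(64, 2)
for _ in range(500):
    X = svgd_step(X, grad_logp=lambda X: -X)   # grad log N(0, I) at x is -x
```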

📝 Abstract
Expensive multi-objective optimization problems (EMOPs) are common in real-world scenarios where evaluating objective functions is costly and involves extensive computations or physical experiments. Current Pareto set learning methods for such problems often rely on surrogate models like Gaussian processes to approximate the objective functions. These surrogate models can become fragmented, resulting in numerous small uncertain regions between explored solutions. When using acquisition functions such as the Lower Confidence Bound (LCB), these uncertain regions can turn into pseudo-local optima, complicating the search for globally optimal solutions. To address these challenges, we propose a novel approach called SVH-PSL, which integrates Stein Variational Gradient Descent (SVGD) with Hypernetworks for efficient Pareto set learning. Our method addresses the issues of fragmented surrogate models and pseudo-local optima by collectively moving particles in a manner that smooths out the solution space. The particles interact with each other through a kernel function, which helps maintain diversity and encourages the exploration of underexplored regions. This kernel-based interaction prevents particles from clustering around pseudo-local optima and promotes convergence towards globally optimal solutions. Our approach aims to establish robust relationships between trade-off reference vectors and their corresponding true Pareto solutions, overcoming the limitations of existing methods. Through extensive experiments across both synthetic and real-world MOO benchmarks, we demonstrate that SVH-PSL significantly improves the quality of the learned Pareto set, offering a promising solution for expensive multi-objective optimization problems.
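As a concrete illustration of the Pareto set learning setup described in the abstract, the sketch below shows a small preference-conditioned model that maps trade-off reference vectors to candidate decision variables, trained against an LCB scalarization of surrogate predictions. This is a hedged sketch under assumed interfaces, not the SVH-PSL implementation: the network architecture, the Dirichlet preference sampling, and the `surrogate_posterior` stand-in for Gaussian-process predictions are all illustrative assumptions, and a full hypernetwork would generate the weights of a target network rather than the solution directly.

```python
import torch
import torch.nn as nn

class ParetoSetModel(nn.Module):
    """Illustrative model mapping an m-dim preference vector to an n-dim solution."""
    def __init__(self, n_obj, n_var, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_obj, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_var), nn.Sigmoid(),  # decision variables scaled to [0, 1]
        )

    def forward(self, prefs):
        return self.net(prefs)

def lcb(mean, std, beta=2.0):
    # Lower Confidence Bound: optimistic (for minimization) where the surrogate is uncertain
    return mean - beta * std

def surrogate_posterior(x):
    # Stand-in for per-objective surrogate predictions (mean, std);
    # in a surrogate-assisted setting these would come from fitted Gaussian processes.
    mean = torch.stack([x.pow(2).sum(dim=1), (x - 1.0).pow(2).sum(dim=1)], dim=1)
    std = 0.1 * torch.ones_like(mean)
    return mean, std

# One illustrative training step: sample preferences on the simplex,
# decode candidate solutions, and minimize a preference-weighted LCB scalarization.
model = ParetoSetModel(n_obj=2, n_var=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
prefs = torch.distributions.Dirichlet(torch.ones(2)).sample((32,))
x = model(prefs)
mean, std = surrogate_posterior(x)
loss = (prefs * lcb(mean, std)).sum(dim=1).mean()
opt.zero_grad()
loss.backward()
opt.step()
```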
Problem

Research questions and friction points this paper is trying to address.

Multi-objective Optimization
Pareto Optimal Solutions
Expensive Function Evaluations
Innovation

Methods, ideas, or system contributions that make the work stand out.

SVH-PSL
Multi-objective Optimization
Pareto Optimal Solutions
👥 Authors
Minh-Duc Nguyen
CECS, VinUniversity
AI Agent, LLM, Optimization
Phuong Mai Dinh
VinUni-Illinois Smart Health Center, VinUniversity, Vietnam; College of Engineering and Computer Science, VinUniversity, Vietnam
Quang-Huy Nguyen
Undergraduate Student, VNU University of Engineering and Technology
Recommender Systems, Trustworthy AI, Large Language Models
Long P. Hoang
Information Systems Technology and Design, Singapore University of Technology and Design, Singapore
Dung D. Le
College of Engineering and Computer Science, VinUniversity, Vietnam