Graph-Supported Dynamic Algorithm Configuration for Multi-Objective Combinatorial Optimization

📅 2025-05-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing deep reinforcement learning (DRL) approaches lack effectiveness for dynamic algorithm configuration in multi-objective combinatorial optimization (MOCO). Method: This paper proposes the first DRL framework integrating graph neural networks (GNNs) for dynamic parameter configuration in multi-objective evolutionary algorithms (MOEAs). It models the evolution of solution sets in objective space as a dynamic graph, employs GNNs to extract topology-aware state representations, and formulates adaptive decision-making as a Markov decision process (MDP). Contribution/Results: This work pioneers GNN-based state representation for MOEA dynamic configuration, achieving strong generalization across objective dimensions (2–5), problem scales (100–1,000 variables), and algorithmic frameworks. Evaluated on standard MOCO benchmarks, it improves both convergence and diversity metrics by an average of 12.7% over conventional parameter tuning and state-of-the-art DRL methods.

Technology Category

Application Category

📝 Abstract
Deep reinforcement learning (DRL) has been widely used for dynamic algorithm configuration, particularly in evolutionary computation, which benefits from the adaptive update of parameters during the algorithmic execution. However, applying DRL to algorithm configuration for multi-objective combinatorial optimization (MOCO) problems remains relatively unexplored. This paper presents a novel graph neural network (GNN) based DRL to configure multi-objective evolutionary algorithms. We model the dynamic algorithm configuration as a Markov decision process, representing the convergence of solutions in the objective space by a graph, with their embeddings learned by a GNN to enhance the state representation. Experiments on diverse MOCO challenges indicate that our method outperforms traditional and DRL-based algorithm configuration methods in terms of efficacy and adaptability. It also exhibits advantageous generalizability across objective types and problem sizes, and applicability to different evolutionary computation methods.
Problem

Research questions and friction points this paper is trying to address.

Dynamic algorithm configuration for multi-objective combinatorial optimization
Enhancing state representation using GNN for evolutionary algorithms
Improving efficacy and adaptability in multi-objective evolutionary computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph neural network enhances state representation
Markov decision process models dynamic configuration
Method outperforms traditional DRL-based approaches
🔎 Similar Papers
No similar papers found.