A Curriculum-Based Deep Reinforcement Learning Framework for the Electric Vehicle Routing Problem

📅 2026-01-21
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses training instability and poor generalization in deep reinforcement learning for the Electric Vehicle Routing Problem with Time Windows (EVRPTW) by proposing a three-stage curriculum learning framework that progressively increases problem complexity. The approach first optimizes routing and fleet size, then incorporates battery management, and finally solves the full EVRPTW. It integrates an enhanced Proximal Policy Optimization (PPO) algorithm with a heterogeneous graph attention network and a global-local attention mechanism, complemented by stage-specific hyperparameters and an adaptive learning-rate scheduler. Trained solely on small-scale instances with 10 customers, the method generalizes effectively to problem sizes from 5 to 100 customers, significantly outperforming existing baselines on medium-scale instances while maintaining high solution feasibility and quality.
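The staged curriculum described above can be sketched as a simple stage scheduler. This is a minimal illustration, not the paper's implementation: the stage names, stage lengths, and hyperparameter values below are assumptions; only the progression (routing and fleet size, then battery, then full EVRPTW) and the use of stage-specific hyperparameters come from the summary.

```python
from dataclasses import dataclass

# Hypothetical sketch of a three-stage curriculum for EVRPTW training.
# Stage A: routing + fleet size; Stage B: adds battery management;
# Stage C: full EVRPTW with time windows. All numeric values are assumed.

@dataclass
class Stage:
    name: str
    constraints: tuple   # constraint families active during this stage
    lr: float            # stage-specific learning rate (illustrative)
    clip_eps: float      # stage-specific PPO clipping range (illustrative)

CURRICULUM = [
    Stage("A_routing_fleet", ("capacity",), lr=3e-4, clip_eps=0.2),
    Stage("B_battery", ("capacity", "battery"), lr=1e-4, clip_eps=0.15),
    Stage("C_full_evrptw", ("capacity", "battery", "time_windows"),
          lr=5e-5, clip_eps=0.1),
]

def active_stage(epoch: int, stage_len: int = 100) -> Stage:
    """Advance one stage every `stage_len` epochs, staying in the last stage."""
    return CURRICULUM[min(epoch // stage_len, len(CURRICULUM) - 1)]
```

A trainer would query `active_stage(epoch)` each epoch to select which constraints to enforce in the environment and which optimizer settings to use.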

πŸ“ Abstract
The electric vehicle routing problem with time windows (EVRPTW) is a complex optimization problem in sustainable logistics, where routing decisions must minimize total travel distance, fleet size, and battery usage while satisfying strict customer time constraints. Although deep reinforcement learning (DRL) has shown great potential as an alternative to classical heuristics and exact solvers, existing DRL models often struggle to maintain training stability, failing to converge or generalize when constraints are dense. In this study, we propose a curriculum-based deep reinforcement learning (CB-DRL) framework designed to resolve this instability. The framework utilizes a structured three-phase curriculum that gradually increases problem complexity: the agent first learns distance and fleet optimization (Phase A), then battery management (Phase B), and finally the full EVRPTW (Phase C). To ensure stable learning across phases, the framework employs a modified proximal policy optimization algorithm with phase-specific hyperparameters, value and advantage clipping, and adaptive learning-rate scheduling. The policy network is built upon a heterogeneous graph attention encoder enhanced by global-local attention and feature-wise linear modulation. This specialized architecture explicitly captures the distinct properties of depots, customers, and charging stations. Trained exclusively on small instances with N=10 customers, the model demonstrates robust generalization to unseen instances ranging from N=5 to N=100, significantly outperforming standard baselines on medium-scale problems. Experimental results confirm that this curriculum-guided approach achieves high feasibility rates and competitive solution quality on out-of-distribution instances where standard DRL baselines fail, effectively bridging the gap between neural speed and operational reliability.
Problem

Research questions and friction points this paper is trying to address.

Electric Vehicle Routing Problem
Time Windows
Deep Reinforcement Learning
Training Stability
Generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Curriculum-based Deep Reinforcement Learning
Electric Vehicle Routing Problem with Time Windows
Heterogeneous Graph Attention Network
Proximal Policy Optimization
Generalization in Combinatorial Optimization
Mertcan Daysalilar
Industrial and Systems Engineering, University of Miami, Coral Gables, FL, USA
Fuat Uyguroğlu
Faculty of Engineering, Cyprus International University, 99258 Nicosia, North Cyprus, via Mersin 10, Turkey
Gabriel Nicolosi
Engineering Management and Systems Engineering, Missouri University of Science and Technology, Rolla, MO, USA
Adam Meyers
Associate Clinical Professor, New York University