🤖 AI Summary
To address the joint optimization of energy constraints and charging decisions for long-distance electric vehicle (EV) travel, this paper proposes VEGA, an intelligent navigation agent. Methodologically, VEGA integrates Physics-Informed Neural Operators (PINO) with budget-guided Proximal Policy Optimization (PPO) reinforcement learning: PINO performs online inversion of vehicle dynamics parameters from onboard speed signals alone, enabling sensor-free, personalized energy-consumption modeling, while PPO jointly optimizes routing and charging decisions over a charger-annotated road network, guided by a state-of-charge-budgeted A* teacher. Experiments show that VEGA matches Tesla Trip Planner in path planning, charging-stop selection, and state-of-charge management on transcontinental routes (e.g., San Francisco to New York), and generalizes well, producing optimal charge-aware paths on unseen road networks in France and Japan.
📝 Abstract
Demands for software-defined vehicles (SDVs) are rising, and electric vehicles (EVs) are increasingly equipped with powerful onboard computers. This enables onboard AI systems to perform charge-aware path optimization customized to the vehicle's current condition and environment. We present VEGA, a charge-aware EV navigation agent that plans over a charger-annotated road graph using Proximal Policy Optimization (PPO) with budgeted A* teacher-student guidance under state-of-charge (SoC) feasibility constraints. VEGA consists of two modules. First, a physics-informed neural operator (PINO), trained on real vehicle speed and battery-power logs, estimates aerodynamic drag, rolling resistance, mass, motor and regenerative-braking efficiencies, and auxiliary load from recent vehicle speed logs, learning a vehicle-specific dynamics model. Second, a Reinforcement Learning (RL) agent uses these dynamics to optimize a path with optimal charging stops and dwell times under SoC constraints. VEGA requires no additional sensors, using only vehicle speed signals, and may therefore serve as a virtual sensor for power and efficiency, potentially reducing EV cost. In evaluations on long routes such as San Francisco to New York, VEGA's stops, dwell times, SoC management, and total travel time closely track Tesla Trip Planner while being slightly more conservative, presumably reflecting real vehicle conditions such as parameter drift from deterioration. Although trained only on U.S. regions, VEGA was able to compute optimal charge-aware paths in France and Japan, demonstrating generalizability. It achieves a practical integration of physics-informed learning and RL for EV eco-routing.
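To make the PINO module concrete: the parameters it estimates (drag, rolling resistance, mass, efficiencies, auxiliary load) plug into a standard longitudinal road-load power model. Below is a minimal sketch of such a model; the parameter values and function name are illustrative assumptions, not values or code from the paper.

```python
# Illustrative longitudinal power model of the kind PINO inverts from
# speed logs. All default parameter values are hypothetical.
RHO = 1.225   # air density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def battery_power(v, a, mass=2000.0, cda=0.55, crr=0.010,
                  eta_motor=0.90, eta_regen=0.65, p_aux=500.0):
    """Battery power draw (W) at speed v (m/s) and acceleration a (m/s^2)."""
    drag = 0.5 * RHO * cda * v ** 2            # aerodynamic drag force, N
    rolling = crr * mass * G                   # rolling-resistance force, N
    inertia = mass * a                         # acceleration force, N
    p_wheel = (drag + rolling + inertia) * v   # tractive power at wheels, W
    if p_wheel >= 0:
        return p_wheel / eta_motor + p_aux     # motor losses while driving
    return p_wheel * eta_regen + p_aux         # partial recovery while braking
```

Given logged speed (hence acceleration) and battery power, the inverse problem is to recover the parameters above; PINO learns this inversion so that only the speed signal is needed at run time.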
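The SoC-budgeted search can be illustrated as A* over augmented states (node, SoC), where each edge consumes energy and charger nodes allow a dwell that refills the battery. This is a simplified sketch under assumed conventions (full recharge at a constant rate, a fixed SoC reserve, a zero default heuristic), not the paper's implementation; all names and parameters are hypothetical.

```python
import heapq

def budgeted_astar(graph, chargers, start, goal, soc0, capacity,
                   reserve=0.1, charge_rate=100.0, h=lambda n: 0.0):
    """A* over (node, SoC) states on a charger-annotated graph.

    graph: node -> list of (next_node, travel_time_s, energy_kWh)
    chargers: set of nodes where the agent may dwell to recharge to full
    Returns (total_time_s, path) or None if no SoC-feasible route exists.
    """
    pq = [(h(start), 0.0, start, soc0, [start])]   # (f, g, node, soc, path)
    best = {}                                      # node -> (soc, g) last settled
    while pq:
        f, g, node, soc, path = heapq.heappop(pq)
        if node == goal:
            return g, path
        # Skip states dominated by an already-settled state at this node.
        if node in best and best[node][0] >= soc and best[node][1] <= g:
            continue
        best[node] = (soc, g)
        options = [(g, soc)]                       # continue without charging
        if node in chargers and soc < capacity:    # optional full-charge stop
            dwell = (capacity - soc) / charge_rate * 3600.0
            options.append((g + dwell, capacity))
        for nxt, time_s, energy in graph.get(node, []):
            for g2, soc2 in options:
                soc3 = soc2 - energy
                if soc3 >= reserve * capacity:     # SoC budget constraint
                    heapq.heappush(pq, (g2 + time_s + h(nxt),
                                        g2 + time_s, nxt, soc3, path + [nxt]))
    return None
```

With a zero heuristic this degenerates to a budgeted Dijkstra search; any admissible travel-time heuristic `h` preserves optimality while pruning the frontier, which is the role the budgeted A* teacher plays for the PPO student.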