🤖 AI Summary
In digital twin-enabled vehicle-infrastructure cooperative systems, high vehicle mobility intensifies resource contention among edge servers, posing a critical challenge in jointly scheduling twin model synchronization and real-time computational tasks under strict latency constraints.
Method: The paper formulates a dual-latency-aware resource utility maximization problem that jointly accounts for twin synchronization latency and task processing latency, and proposes MADRL-CSTC, a collaborative scheduling framework based on multi-agent deep reinforcement learning. The framework combines a multi-agent Markov decision process formulation, an improved deep Q-network, and a satisfaction-function-based transformation of the utility objective to enable dynamic, distributed resource coordination.
Results: Simulations show that MADRL-CSTC reduces average end-to-end latency by 23.6% and improves resource utility by 19.4% over baseline algorithms, significantly improving system responsiveness and stability in highly dynamic vehicular environments.
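The dual-latency utility in the method above can be illustrated with a small sketch. The paper does not publish its exact satisfaction function, so the sigmoid shape, the `steepness` parameter, and the weight `w_sync` below are illustrative assumptions: satisfaction stays near 1 while a latency is comfortably under its deadline and drops toward 0 once the deadline is exceeded, and the per-vehicle utility blends the twin-synchronization and task-processing terms.

```python
import math


def satisfaction(latency: float, deadline: float, steepness: float = 5.0) -> float:
    """Hypothetical sigmoid satisfaction function (the paper's exact form
    is not given): ~1 well under the deadline, ~0.5 at the deadline,
    falling toward 0 as latency exceeds it."""
    return 1.0 / (1.0 + math.exp(steepness * (latency - deadline) / deadline))


def resource_utility(sync_latency: float, task_latency: float,
                     sync_deadline: float, task_deadline: float,
                     w_sync: float = 0.5) -> float:
    """Combine twin-maintenance (synchronization) and computing-task
    satisfaction into one per-vehicle utility, mirroring the
    dual-latency-aware objective; w_sync is an assumed weighting."""
    return (w_sync * satisfaction(sync_latency, sync_deadline)
            + (1.0 - w_sync) * satisfaction(task_latency, task_deadline))
```

Maximizing the sum of such utilities over vehicles, subject to the server's resource budget, is the kind of non-convex allocation problem the MDP reformulation targets.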
📝 Abstract
As a promising technology, vehicular edge computing (VEC) can provide computing and caching services by deploying VEC servers near vehicles. However, VEC networks still face challenges such as high vehicle mobility. Digital twin (DT), an emerging technology, can predict, estimate, and analyze real-time states by digitally modeling objects in the physical world. By integrating DT with VEC, a virtual vehicle DT can be created in the VEC server to monitor the real-time operating status of vehicles. However, maintaining the vehicle DT model requires continuous resources from the VEC server, which must also provide computing services for the vehicles. Effective allocation and scheduling of VEC server resources are therefore crucial. This study considers a general VEC network with a single VEC server and multiple vehicles, and examines the two types of delay incurred in the network: twin maintenance delay and computational processing delay. By transforming the problem with satisfaction functions, we formulate an optimization problem that maximizes each vehicle's resource utility to determine the optimal resource allocation strategy. Given the non-convex nature of the problem, we reformulate it as a multi-agent Markov decision process. We then propose the twin maintenance and computing task processing resource collaborative scheduling (MADRL-CSTC) algorithm, which leverages multi-agent deep reinforcement learning. Experimental comparisons with alternative algorithms demonstrate that the proposed approach is effective for resource allocation.
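The multi-agent reformulation treats each vehicle as an agent that learns how much server resource to request. The paper's improved deep Q-network and its coordination mechanism are not reproduced here; as a self-contained illustration, the sketch below substitutes plain tabular Q-learning, with an assumed discrete state (server congestion level) and action (requested resource share). All class and parameter names are hypothetical.

```python
import random


class QAgent:
    """Minimal tabular stand-in for one per-vehicle agent in a
    multi-agent Q-learning scheduler (the paper uses an improved DQN;
    a Q-table replaces the network here for a runnable sketch)."""

    def __init__(self, n_states: int, n_actions: int,
                 alpha: float = 0.1, gamma: float = 0.9, eps: float = 0.1):
        # q[s][a]: estimated long-run utility of requesting resource
        # share a when the server is in congestion state s.
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.n_actions = n_actions

    def act(self, s: int) -> int:
        """Epsilon-greedy action: explore with probability eps,
        otherwise request the share with the highest Q-value."""
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        row = self.q[s]
        return row.index(max(row))

    def learn(self, s: int, a: int, r: float, s_next: int) -> None:
        """Standard Q-learning update toward reward plus discounted
        best next-state value; r would be the vehicle's resource
        utility minus any contention penalty."""
        target = r + self.gamma * max(self.q[s_next])
        self.q[s][a] += self.alpha * (target - self.q[s][a])
```

In a full MADRL-CSTC-style loop, each vehicle's agent would observe the shared server state, pick a resource request, and receive its satisfaction-based utility as reward, so that the agents jointly learn a collaborative scheduling policy.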