🤖 AI Summary
To address the resource allocation challenges in vehicular multi-fog computing (MFC) arising from high vehicle mobility, heterogeneous resources, and dynamic workloads, this paper proposes a reinforcement learning-based dynamic task scheduling framework. We formulate resource allocation as a Markov decision process and integrate Q-learning, deep Q-networks (DQN), and the Actor-Critic algorithm within a collaborative multi-fog node architecture to enable distributed intelligent scheduling. Compared with conventional optimization approaches, the proposed framework significantly reduces end-to-end latency (by 32.7% on average), improves task success rate (+18.4%), enhances load balancing, and strengthens system scalability and QoS guarantees. It establishes an efficient, adaptive resource management paradigm tailored to dynamic vehicular MFC environments.
📄 Abstract
The exponential growth of Internet of Things (IoT) devices, smart vehicles, and latency-sensitive applications has created an urgent demand for efficient distributed computing paradigms. Multi-Fog Computing (MFC), as an extension of fog and edge computing, deploys multiple fog nodes near end users to reduce latency, enhance scalability, and ensure Quality of Service (QoS). However, resource allocation in MFC environments is highly challenging due to dynamic vehicular mobility, heterogeneous resources, and fluctuating workloads. Traditional optimization-based methods often fail to adapt to such dynamics. Reinforcement Learning (RL), as a model-free decision-making framework, enables adaptive task allocation by continuously interacting with the environment. This paper formulates the resource allocation problem in MFC as a Markov Decision Process (MDP) and investigates the application of RL algorithms such as Q-learning, Deep Q-Networks (DQN), and Actor-Critic. We present experimental results demonstrating improvements in latency, workload balance, and task success rate. The contributions and novelty of this study are also discussed, highlighting the role of RL in addressing emerging vehicular computing challenges.
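To make the MDP formulation concrete, the sketch below trains a tabular Q-learning scheduler on a toy version of the problem: tasks arrive one per step and must be dispatched to one of several heterogeneous fog nodes. The state (queue lengths), service rates, latency model, and all hyperparameters are invented for illustration and are not the paper's actual model or results; DQN and Actor-Critic would replace the Q-table with function approximators, but the decision loop is the same.

```python
import random

# Hypothetical toy MDP (not from the paper): 3 fog nodes with different
# service rates; state = tuple of queue lengths; action = node to dispatch
# the arriving task to; reward = negative estimated task latency.
NUM_NODES = 3
SPEED = [1.0, 0.5, 0.25]   # assumed per-step service probability of each node
MAX_QUEUE = 5

def step(queues, action, rng):
    """Dispatch one task to node `action`, then let each node serve stochastically."""
    latency = (queues[action] + 1) / SPEED[action]   # wait behind queue + own service
    q = list(queues)
    q[action] = min(q[action] + 1, MAX_QUEUE)
    for i in range(NUM_NODES):
        if q[i] > 0 and rng.random() < SPEED[i]:     # node i completes one task
            q[i] -= 1
    return tuple(q), -latency

def greedy(Q, s):
    """Action with the highest learned Q-value in state s (0.0 if unseen)."""
    return max(range(NUM_NODES), key=lambda a: Q.get((s, a), 0.0))

def train(episodes=3000, steps=20, alpha=0.1, gamma=0.9, eps=0.2, seed=1):
    """Standard epsilon-greedy tabular Q-learning over the toy MDP."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        s = (0,) * NUM_NODES
        for _ in range(steps):
            a = rng.randrange(NUM_NODES) if rng.random() < eps else greedy(Q, s)
            s2, r = step(s, a, rng)
            target = r + gamma * max(Q.get((s2, a2), 0.0) for a2 in range(NUM_NODES))
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))
            s = s2
    return Q

def evaluate(policy, episodes=200, steps=20, seed=2):
    """Average per-task reward (negative latency) under a fixed policy."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(episodes):
        s = (0,) * NUM_NODES
        for _ in range(steps):
            s, r = step(s, policy(s), rng)
            total += r
    return total / (episodes * steps)

Q = train()
learned = evaluate(lambda s: greedy(Q, s))
slowest = evaluate(lambda s: NUM_NODES - 1)   # naive baseline: always pick slowest node
print(f"avg reward  learned: {learned:.2f}  always-slowest: {slowest:.2f}")
```

The learned policy favors the fast node when queues are empty and spreads load as queues build, which is the load-balancing behavior the abstract describes; the printed averages show it incurring far less latency than the naive baseline.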