🤖 AI Summary
This study addresses the challenge of dynamic resource allocation and prediction in three-tier vehicular fog computing by proposing a Q-learning-based adaptive resource-management approach. The method requires no prior knowledge of the environment: it uses reinforcement learning to learn from historical interactions and dynamically optimizes the allocation of memory, bandwidth, and processing resources in real time. As the first work to apply Q-learning within a three-tier vehicular fog computing architecture, the proposed scheme significantly reduces average task processing time and resource consumption, outperforming the evaluated baseline methods while meeting system performance requirements and thereby improving overall responsiveness and resource-utilization efficiency.
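At its core, such an agent maintains a table of action values refined by the standard tabular Q-learning update rule:

$$Q(s,a) \leftarrow Q(s,a) + \alpha\bigl[r + \gamma \max_{a'} Q(s',a') - Q(s,a)\bigr]$$

where, in this setting, the state $s$ would capture the current system conditions, the action $a$ is a candidate allocation of memory, bandwidth, and processing, $r$ is the reward signal, and $\alpha$ and $\gamma$ are the learning rate and discount factor.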
📝 Abstract
In this paper, a method is proposed for predicting the resources required by an intelligent vehicle client in a three-tier vehicular fog computing architecture. The method leverages Q-learning to optimize resource allocation and enhance overall system performance, using reinforcement learning to provide a dynamic and adaptive resource-management strategy in a fog computing environment. The key finding of this study is that Q-learning can effectively predict the appropriate allocation of resources by learning from past experience and making informed decisions. Through continuous training and updating of the Q-learning agent, the system adapts to changing conditions and makes resource-allocation decisions based on real-time information. The experimental results demonstrate the effectiveness of the proposed method in optimizing resource allocation: the Q-learning agent predicts optimal values for memory, bandwidth, and processor resources that both minimize resource consumption and meet the performance requirements of the fog system. The implementation results further show that this method improves average task processing time compared with the other methods evaluated in this study.
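A minimal sketch of how such a tabular Q-learning agent might be structured is shown below. The discretization levels, reward weights, and function names are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import random
from collections import defaultdict

# Hypothetical discretized action space: each action is a
# (memory_GB, bandwidth_Mbps, cpu_cores) allocation level.
ACTIONS = [(m, b, c) for m in (1, 2, 4)
                     for b in (10, 50, 100)
                     for c in (1, 2, 4)]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

Q = defaultdict(float)  # Q-table: (state, action_index) -> estimated value

def choose_action(state):
    """Epsilon-greedy policy over the discrete allocation actions."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-learning update from an observed transition."""
    best_next = max(Q[(next_state, a)] for a in range(len(ACTIONS)))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def reward_fn(latency_ms, deadline_ms, mem, bw, cpu):
    """Penalize missed deadlines, reward frugal allocations (illustrative weights)."""
    if latency_ms > deadline_ms:
        return -10.0                            # performance requirement violated
    return 1.0 - 0.01 * (mem + bw / 100 + cpu)  # smaller allocations score higher
```

In a training loop, `state` would presumably encode the observed task and network conditions (for example, discretized load and queue length), the chosen allocation would be applied in the fog environment, and the measured task latency would be fed back through `reward_fn` and `update`, letting the agent adapt its allocation policy from real-time feedback as the abstract describes.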