Dynamic Operating System Scheduling Using Double DQN: A Reinforcement Learning Approach to Task Optimization

📅 2025-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional OS scheduling policies struggle to adapt to dynamic workloads and heterogeneous task types. To address this, the paper applies Double Deep Q-Networks (Double DQN) to operating-system-level online scheduling for the first time, proposing a state-aware reinforcement learning scheduler. The method models real-time system state (CPU utilization, I/O activity, and queue characteristics) and jointly optimizes task prioritization and resource allocation via online training and deployment. Experiments demonstrate that under light, medium, and heavy loads, the approach reduces average task completion time and response latency by 21.6%–34.8%, with particularly pronounced acceleration for I/O-intensive tasks; it also improves resource utilization by 19.3%, effectively mitigating both waste and overload. This work breaks from static scheduling paradigms and establishes a scalable, adaptive scheduling framework applicable to cloud and distributed systems.
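The core mechanism the summary describes can be illustrated with the standard Double DQN target, which decouples action selection (online network) from action evaluation (target network) to reduce value overestimation. The sketch below uses linear stand-in Q-functions and hypothetical state features; the dimensions, feature names, and discount factor are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a 4-feature system state (e.g. CPU utilization,
# I/O wait, run-queue length, memory pressure) and 3 scheduling actions.
STATE_DIM, N_ACTIONS, GAMMA = 4, 3, 0.99

# Linear stand-ins for the Q-functions; the paper uses neural networks.
online_w = rng.normal(size=(STATE_DIM, N_ACTIONS))
target_w = online_w.copy()

def q_values(w, state):
    """Q-value for every action under weight matrix w."""
    return state @ w

def double_dqn_target(reward, next_state, done):
    """Double DQN target: the online net selects the next action,
    the target net evaluates it."""
    if done:
        return reward
    best_action = int(np.argmax(q_values(online_w, next_state)))
    return reward + GAMMA * q_values(target_w, next_state)[best_action]

s_next = rng.normal(size=STATE_DIM)
y = double_dqn_target(1.0, s_next, done=False)
```

In training, `y` would serve as the regression target for the online network's Q-value of the taken action, with `target_w` periodically synchronized from `online_w`.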

📝 Abstract
In this paper, an operating system scheduling algorithm based on Double DQN (Double Deep Q-Network) is proposed, and its performance under different task types and system loads is verified experimentally. Compared with traditional scheduling algorithms, the Double DQN-based algorithm can dynamically adjust task priorities and the resource allocation strategy, thus improving task completion efficiency, system throughput, and response speed. The experimental results show that the Double DQN algorithm achieves high scheduling performance under light-, medium-, and heavy-load scenarios, especially when handling I/O-intensive tasks, and can effectively reduce task completion time and system response time. In addition, the algorithm shows strong optimization ability in resource utilization and can intelligently adjust resource allocation according to the system state, avoiding both resource waste and excessive load. Future studies will further explore the application of the algorithm in more complex systems, particularly scheduling optimization in cloud computing and large-scale distributed environments, incorporating factors such as network latency and energy efficiency to improve the overall performance and adaptability of the algorithm.
Problem

Research questions and friction points this paper is trying to address.

Dynamic task priority adjustment for efficiency
Optimizing resource allocation in varying system loads
Reducing task completion time in I/O intensive scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Double DQN for dynamic task priority adjustment
Optimizes resource allocation using reinforcement learning
Enhances performance across varying system loads
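As a rough illustration of the state modeling the bullets above describe, the sketch below builds a normalized state vector and a latency-penalizing reward from raw system metrics. All field names, normalization constants, and reward weights are assumptions chosen for illustration, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class SystemSnapshot:
    cpu_util: float      # fraction of CPU busy, 0..1
    io_wait: float       # fraction of time tasks block on I/O, 0..1
    queue_len: int       # tasks waiting in the run queue
    avg_wait_ms: float   # mean time queued tasks have waited so far

def to_state(snap: SystemSnapshot) -> list[float]:
    """Normalize raw metrics into a fixed-size RL state vector."""
    return [
        snap.cpu_util,
        snap.io_wait,
        min(snap.queue_len / 64.0, 1.0),     # cap assumed at 64 tasks
        min(snap.avg_wait_ms / 1000.0, 1.0), # cap assumed at 1 second
    ]

def reward(snap: SystemSnapshot) -> float:
    """Reward utilization, penalize I/O stalls and waiting (weights illustrative)."""
    return snap.cpu_util - 0.5 * snap.io_wait - 0.01 * snap.avg_wait_ms

state = to_state(SystemSnapshot(cpu_util=0.7, io_wait=0.2,
                                queue_len=8, avg_wait_ms=120.0))
```

A state vector of this shape would be the input to the Double DQN's Q-network, and the reward is what the scheduler maximizes over time.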
Xiaoxuan Sun
University of Southern California
Yifei Duan
University of Pennsylvania, Philadelphia, USA
Yingnan Deng
Georgia Institute of Technology, Atlanta, USA
Fan Guo
Los Alamos National Laboratory
Particle acceleration, Magnetic Reconnection, Cosmic rays, Plasma Astrophysics, Space Physics
Guohui Cai
Illinois Institute of Technology, Chicago, USA
Yuting Peng
New York University / Shandong University
Computer Science