Optimizing Data Transfer Performance and Energy Efficiency with Deep Reinforcement Learning

📅 2025-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the joint optimization of performance and energy efficiency for end-to-end data transmission in shared networks, this paper proposes a fairness-aware, dynamic, multi-parameter deep reinforcement learning framework that adaptively tunes transport-layer parameters in real time from the application layer. The method introduces a dual-objective reward function—combining weighted throughput and energy consumption with explicit fairness constraints—and enables intelligent activation and deactivation of transfer threads to mitigate congestion and reduce energy usage. It integrates proximal policy optimization (PPO) and advantage actor-critic (A2C) algorithms, dynamic thread scheduling, and multi-dimensional state perception. Experimental results demonstrate that, compared to baseline approaches, the proposed framework achieves up to a 25% improvement in throughput and up to a 40% reduction in end-system energy consumption while maintaining fair resource allocation.
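The summary describes a dual-objective reward combining weighted throughput and energy with a fairness term, but the page does not give the exact formula. The following is a hypothetical sketch under stated assumptions: the weights (`w_tput`, `w_energy`, `w_fair`) and the use of Jain's fairness index are illustrative choices, not the paper's actual reward.

```python
def jains_fairness(throughputs):
    """Jain's fairness index: 1.0 when all flows get equal throughput."""
    total = sum(throughputs)
    if not throughputs or total == 0:
        return 1.0
    return total ** 2 / (len(throughputs) * sum(x * x for x in throughputs))

def reward(throughput_gbps, energy_joules, flow_throughputs,
           w_tput=1.0, w_energy=0.5, w_fair=0.5):
    """Illustrative dual-objective reward: reward throughput, penalize
    energy, and credit fairness across concurrent flows (all weights
    are assumed, not taken from the paper)."""
    fairness = jains_fairness(flow_throughputs)
    return w_tput * throughput_gbps - w_energy * energy_joules + w_fair * fairness
```

With equal per-flow throughputs, the fairness term contributes its full weight, so `reward(10.0, 4.0, [1, 1, 1])` evaluates to `10.0 - 2.0 + 0.5 = 8.5` under the default weights.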

📝 Abstract
The rapid growth of data across fields of science and industry has increased the need to improve the performance of end-to-end data transfers while using resources more efficiently. In this paper, we present a dynamic, multiparameter reinforcement learning (RL) framework that adjusts application-layer transfer settings during data transfers on shared networks. Our method strikes a balance between high throughput and low energy utilization by employing reward signals that focus on both energy efficiency and fairness. The RL agents can pause and resume transfer threads as needed—pausing during heavy network use and resuming when resources are available—to prevent overload and save energy. We evaluate several RL techniques and compare our solution with state-of-the-art methods by measuring computational overhead, adaptability, throughput, and energy consumption. Our experiments show up to a 25% increase in throughput and up to a 40% reduction in energy usage at the end systems compared to baseline methods, highlighting a fair and energy-efficient way to optimize data transfers in shared network environments.
Problem

Research questions and friction points this paper is trying to address.

Optimize data transfer performance and energy efficiency
Dynamic RL framework adjusts transfer settings on shared networks
Balance throughput and energy usage with adaptive RL agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic multiparameter reinforcement learning framework
Balances throughput and energy efficiency
Pauses and resumes transfer threads adaptively
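The pause/resume mechanism in the bullets above can be sketched as a thread pool whose active-thread count is driven by the agent's action. This is a toy illustration, not the paper's implementation: the class name, action encoding (-1 pause one thread, +1 resume one, 0 hold), and thread cap are all assumptions for demonstration.

```python
class TransferThreadPool:
    """Toy pool where an RL agent's action sets how many transfer
    threads are active, modeling adaptive pause/resume control."""

    def __init__(self, max_threads=8):
        self.max_threads = max_threads
        self.active = max_threads  # start with all threads active

    def apply_action(self, action):
        # action: -1 pauses one thread, +1 resumes one, 0 holds steady.
        # Clamp so at least one thread keeps the transfer alive and we
        # never exceed the configured maximum.
        self.active = max(1, min(self.max_threads, self.active + action))
        return self.active
```

In a real deployment the agent would pick the action each control interval from observed state (throughput, RTT, energy draw), and pausing would park worker threads on a synchronization primitive rather than adjusting a counter.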
Hasubil Jamil
Department of Computer Science and Engineering, University at Buffalo, Buffalo, NY
Jacob Goldverg
Department of Computer Science and Engineering, University at Buffalo, Buffalo, NY
Elvis Rodrigues
Department of Computer Science and Engineering, University at Buffalo, Buffalo, NY
M. S. Q. Z. Nine
Department of Computer Science, Tennessee Tech University, Cookeville, TN
Tevfik Kosar
Professor, University at Buffalo (SUNY)
Distributed systems · Green and sustainable computing · AI/ML for systems