🤖 AI Summary
To address task overload, high packet drop rates, excessive latency, and low reliability caused by resource constraints on user equipment (UE) in 5G mobile edge computing (MEC), this paper proposes a task partitioning and collaborative execution mechanism. It is the first to jointly model task partitioning decisions and 5G New Radio (NR) radio resource block (RB) allocation—departing from conventional all-offload or all-local paradigms—to simultaneously achieve zero task dropping, low end-to-end latency, and high task completion rate. A mixed-integer linear programming (MILP) formulation is developed for exact optimization, and a cuckoo search-based heuristic algorithm incorporating RB constraints is designed for scalable deployment. Experimental results demonstrate that, compared to the full-offloading baseline, the MILP solution reduces end-to-end latency by 24%, while the heuristic reduces it by 18%; both achieve a 100% task completion rate. Moreover, the proposed approach significantly improves load balancing and service reliability.
📝 Abstract
The demand for MEC has increased with the rise of data-intensive applications and 5G networks, while conventional cloud models struggle to satisfy low-latency requirements. While task offloading is crucial for minimizing latency on resource-constrained User Equipment (UE), fully offloading all tasks to MEC servers may result in overload and possible task drops. Overlooking the number of dropped tasks can significantly undermine system efficiency, as each dropped task represents an unfulfilled service demand and reduced reliability, directly degrading user experience and overall network performance. In this paper, we employ task partitioning, enabling part of each task to be processed locally while the remainder is assigned to the MEC server, thus balancing the load and ensuring no task drops. This methodology improves efficiency via Mixed Integer Linear Programming (MILP) and Cuckoo Search, yielding effective task assignment and minimal latency. Moreover, we ensure each user's resource block (RB) allocation stays within the maximum limit while keeping latency low. Experimental results indicate that this strategy surpasses both full offloading and full local processing, providing significant improvements in latency and task completion rate across diverse numbers of users. In our scenario, MILP task partitioning achieves a 24% latency reduction compared to MILP task offloading for the maximum number of users, whereas Cuckoo Search task partitioning yields an 18% latency reduction compared to Cuckoo Search task offloading.
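To make the partitioning idea concrete, below is a minimal, illustrative sketch of a Cuckoo Search heuristic that chooses, for each user, the fraction of its task to offload, with local and offloaded partitions running in parallel and the uplink rate capped by a per-user RB limit. All numeric parameters (task sizes, CPU frequencies, RB throughput, population size) are hypothetical placeholders, and the latency model is a simplified stand-in for the paper's actual MILP formulation.

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)

# --- Hypothetical scenario parameters (illustrative only) ---
N = 5                                # number of users
cycles = rng.uniform(2e8, 6e8, N)    # CPU cycles required per task
bits = rng.uniform(1e6, 4e6, N)      # task input size (bits)
f_loc = 1e9                          # local UE CPU frequency (Hz)
f_mec = 8e9 / N                      # MEC capacity share per user (Hz)
rb_rate = 1.5e6                      # uplink throughput per RB (bit/s)
rb_max = 4                           # per-user RB allocation cap

def latency(alpha):
    """Per-user end-to-end latency when fraction alpha is offloaded;
    local and offloaded partitions execute in parallel."""
    a = np.clip(alpha, 0.0, 1.0)
    t_local = (1 - a) * cycles / f_loc
    t_tx = a * bits / (rb_max * rb_rate)   # RB-capped uplink transmission
    t_mec = a * cycles / f_mec
    return np.maximum(t_local, t_tx + t_mec)

def fitness(alpha):
    return latency(alpha).mean()

def levy(size, beta=1.5):
    """Mantegna's algorithm for Levy-flight step sizes."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(n_nests=15, iters=300, pa=0.25, step=0.05):
    # Each nest is a candidate partitioning vector alpha in [0, 1]^N.
    nests = rng.uniform(0, 1, (n_nests, N))
    fits = np.array([fitness(x) for x in nests])
    for _ in range(iters):
        # Generate new solutions via Levy flights; replace a random worse nest.
        for i in range(n_nests):
            cand = np.clip(nests[i] + step * levy(N), 0, 1)
            j = rng.integers(n_nests)
            if fitness(cand) < fits[j]:
                nests[j], fits[j] = cand, fitness(cand)
        # Abandon a fraction pa of the worst nests and re-seed them randomly.
        worst = np.argsort(fits)[-int(pa * n_nests):]
        nests[worst] = rng.uniform(0, 1, (len(worst), N))
        fits[worst] = [fitness(x) for x in nests[worst]]
    best = np.argmin(fits)
    return nests[best], fits[best]

best_alpha, best_lat = cuckoo_search()
```

Because the two partitions run in parallel, an interior split generally beats both extremes: the found `best_lat` should be no worse than full offloading (`fitness(np.ones(N))`) or full local processing (`fitness(np.zeros(N))`), mirroring the comparison reported in the abstract.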