Discrete Effort Distribution via Regrettable Greedy Algorithm

📅 2025-03-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the maximization of a separable objective function ∑R_j(x_j) subject to a single linear constraint ∑x_j = k, where x_j ∈ {0,…,m}. To overcome the O(n²m²) time complexity bottleneck of classical dynamic programming (DP), we propose the first *regrettable greedy algorithm*, enabling batch computation of optimal integer solutions for all k ∈ [0, nm] in O(n log n) time. Our method leverages sorted discrete marginal gains and dynamically adjusts regret values to guide assignment decisions. When m is constant, the algorithm achieves the theoretically optimal time complexity. Experimental results demonstrate substantial speedups over standard DP, often by several orders of magnitude, while preserving solution optimality. The approach bridges theoretical efficiency and practical applicability, offering both rigorous guarantees and scalable performance on large-scale instances.
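The paper's regret-adjustment mechanism for general R_j is not reproduced here, but the sorted-marginal-gains idea it builds on can be sketched in the classical special case where each R_j is concave (diminishing returns). Under that assumption, sorting all nm marginal gains once and taking prefix sums yields the optimum for every budget k simultaneously, which illustrates the "all k at once" flavor of the result. The function name `allocate_all_k` and the calling convention are illustrative, not from the paper:

```python
def allocate_all_k(R, m):
    """Greedy via globally sorted marginal gains.

    Assumes each R[j] is concave on {0, ..., m} (marginal gains are
    nonincreasing), so a single global sort respects each item's
    internal order. Returns best[k] = max sum_j R_j(x_j) subject to
    sum_j x_j = k, for every k in 0..n*m.

    R: list of callables, R[j](x) defined for integer x in 0..m.
    """
    gains = []
    for Rj in R:
        for x in range(1, m + 1):
            gains.append(Rj(x) - Rj(x - 1))  # gain of the x-th unit of item j
    gains.sort(reverse=True)  # largest marginal gains first

    best = [sum(Rj(0) for Rj in R)]  # optimum for k = 0
    for g in gains:
        best.append(best[-1] + g)  # optimum for k units = sum of top-k gains
    return best
```

For non-concave R_j a plain global sort is no longer valid, which is exactly the gap the paper's regret values are designed to close.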

📝 Abstract
This paper addresses a resource allocation problem with a separable objective function under a single linear constraint, formulated as maximizing $\sum_{j=1}^{n}R_j(x_j)$ subject to $\sum_{j=1}^{n}x_j=k$ and $x_j\in\{0,\dots,m\}$. While the classical dynamic programming approach solves this problem in $O(n^2m^2)$ time, we propose a regrettable greedy algorithm that achieves $O(n\log n)$ time when $m=O(1)$. The algorithm significantly outperforms traditional dynamic programming for small $m$. In fact, our algorithm solves the problem for all $k~(0\leq k\leq nm)$ within the stated time bound.
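The $O(n^2m^2)$ dynamic-programming baseline the abstract compares against can be sketched as a standard knapsack-style recurrence over items: for each item $j$ and each budget $k$, try every allocation $x\in\{0,\dots,m\}$. The function name `dp_all_k` and the table-of-values input format are assumptions for illustration; the paper's own baseline may differ in details:

```python
def dp_all_k(R, m):
    """Textbook DP baseline: no concavity assumption on the R_j.

    R: list of n value tables, R[j][x] for integer x in 0..m.
    Returns best[k] = max sum_j R[j][x_j] with sum_j x_j = k,
    for every k in 0..n*m. Since there are O(nm) budgets, n items,
    and m+1 choices per step, the total work is O(n^2 m^2).
    """
    NEG = float("-inf")
    best = [0]  # optimum over zero items: only k = 0 is feasible
    for table in R:
        cap = len(best) - 1 + m       # largest reachable budget so far
        new = [NEG] * (cap + 1)
        for k, v in enumerate(best):
            if v == NEG:
                continue
            for x in range(m + 1):    # allocate x units to the current item
                if v + table[x] > new[k + x]:
                    new[k + x] = v + table[x]
        best = new
    return best
```

This brute-force recurrence is what the paper's greedy replaces when $m=O(1)$, dropping the per-budget work to a single sorted pass.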
Problem

Research questions and friction points this paper is trying to address.

Optimizing resource allocation with separable objectives
Reducing time complexity for discrete effort distribution
Solving constrained maximization via efficient greedy algorithm
Innovation

Methods, ideas, or system contributions that make the work stand out.

Regrettable greedy algorithm for resource allocation
Achieves O(n log n) time complexity
Solves problem for all k in linear constraint
Song Cao
University of Southern California, Computer Vision

Taikun Zhu
Shenzhen Campus of Sun Yat-sen University, Shenzhen, Guangdong, China

Kai Jin
Shenzhen Campus of Sun Yat-sen University, Shenzhen, Guangdong, China