🤖 AI Summary
This paper studies monotone submodular maximization under a matroid constraint. Addressing a long-standing bottleneck in the approximation ratio of deterministic algorithms, previously capped at 0.5008, it introduces the first deterministic non-oblivious local search algorithm achieving an approximation ratio of $1 - 1/e - \varepsilon$, closing the gap between deterministic and randomized algorithms for this problem. The method exploits matroid structure to attain query complexity $\tilde{O}_\varepsilon(nr)$; incorporating lightweight randomization improves this to $\tilde{O}_\varepsilon(n + r\sqrt{n})$. No prior algorithm with a fully deterministic core design achieved the $1 - 1/e - \varepsilon$ guarantee, so the result advances the state of the art in both approximation quality and query efficiency for constrained submodular optimization.
📝 Abstract
We study the problem of maximizing a monotone submodular function subject to a matroid constraint, and present for it a deterministic non-oblivious local search algorithm that has an approximation guarantee of $1-1/e-\epsilon$ (for any $\epsilon > 0$) and query complexity of $\tilde{O}_{\epsilon}(nr)$, where $n$ is the size of the ground set and $r$ is the rank of the matroid. Our algorithm vastly improves over the previous state-of-the-art 0.5008-approximation deterministic algorithm, and in fact, shows that there is no separation between the approximation guarantees that can be obtained by deterministic and randomized algorithms for the problem considered. The query complexity of our algorithm can be improved to $\tilde{O}_{\epsilon}(n+r\sqrt{n})$ using randomization, which is nearly-linear for $r=O(\sqrt{n})$, and is always at least as good as the previous state-of-the-art algorithms.
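To make the setting concrete, here is a minimal illustrative sketch of plain (oblivious) swap-based local search for a monotone submodular function under a uniform matroid (a cardinality constraint $|S| \le r$), using a coverage function as the submodular objective. This is *not* the paper's algorithm: the paper's non-oblivious variant guides the swaps with an auxiliary potential function rather than $f$ itself, which is how it escapes the $1/2$ barrier of the plain version shown below. All names and the example instance are ours.

```python
def coverage(sets_by_elem, S):
    """Submodular objective: number of universe items covered by the chosen sets."""
    covered = set()
    for i in S:
        covered |= sets_by_elem[i]
    return len(covered)

def swap_local_search(sets_by_elem, r):
    """Oblivious local search: start from an arbitrary base of the uniform
    matroid (any set of size r) and repeat single-element swaps while some
    swap strictly increases f. Plain local search like this only guarantees
    a 1/2-approximation; the paper's non-oblivious potential lifts that to
    1 - 1/e - epsilon."""
    n = len(sets_by_elem)
    f = lambda S: coverage(sets_by_elem, S)
    S = set(range(r))  # arbitrary feasible starting base
    improved = True
    while improved:
        improved = False
        for a in list(S):
            for b in range(n):
                if b in S:
                    continue
                T = (S - {a}) | {b}  # swap keeps |T| = r, so T stays feasible
                if f(T) > f(S):
                    S, improved = T, True
                    break
            if improved:
                break
    return S, f(S)
```

For instance, with ground set `[{1}, {2}, {1, 2, 3}]` and $r = 1$, the search starts at `{0}` (value 1) and swaps to `{2}` (value 3), a local and here also global optimum.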