Near-Optimal Sparsifiers for Stochastic Knapsack and Assignment Problems

📅 2025-11-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper studies which items to probe, under uncertainty and costly information acquisition, when solutions must satisfy knapsack-type constraints. Conventional cardinality-based sparsification fails when the feasible set undergoes drastic structural changes. Method: the authors propose a "polyhedral sparsification" framework, introducing a sparsity measure (the degree) grounded in embedding the query set within a scaled feasibility polytope. The approach combines polyhedral embedding analysis, weight grouping, and a charging (compensation) argument to characterize and control query redundancy. Contribution/Results: stochastic knapsack, multiple-knapsack, and generalized assignment problems admit (1−ε)-approximate sparsifiers whose degree is poly(1/p, 1/ε), depending only on the activation probability p and the accuracy ε, independent of problem size. This yields the first efficient, theoretically guaranteed sparsification scheme for several problems whose optimization counterparts are APX-hard or lack an FPTAS.

📝 Abstract
When uncertainty meets costly information gathering, a fundamental question emerges: which data points should we probe to unlock near-optimal solutions? Sparsification of stochastic packing problems addresses this trade-off. The existing notions of sparsification measure the level of sparsity, called degree, as the ratio of queried items to the optimal solution size. While effective for matching and matroid-type problems with uniform structures, this cardinality-based approach fails for knapsack-type constraints, where feasible sets exhibit dramatic structural variation. We introduce a polyhedral sparsification framework that measures the degree as the smallest scalar needed to embed the query set within a scaled feasibility polytope, naturally capturing redundancy without relying on cardinality. Our main contribution establishes that knapsack, multiple knapsack, and generalized assignment problems admit (1−ε)-approximate sparsifiers with degree polynomial in 1/p and 1/ε, where p denotes the independent activation probability of each element, remarkably independent of problem dimensions. The key insight involves grouping items with similar weights and deploying a charging argument: when our query set misses an optimal item, we either substitute it with a queried item from the same group or leverage that group's excess contribution to compensate for the loss. This reveals an intriguing complexity-theoretic separation: while the multiple knapsack problem lacks an FPTAS and generalized assignment is APX-hard, their sparsification counterparts admit efficient (1−ε)-approximation algorithms that identify polynomial-degree query sets. Finally, we raise an open question: can such sparsification extend to general integer linear programs with degree independent of problem dimensions?
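The degree measure and the weight-grouping idea from the abstract can be sketched for a single knapsack constraint. This is a minimal illustration, not the paper's construction: the geometric group boundaries and the per-group budget of ⌈1/(p·ε)⌉ are assumptions of this sketch, standing in for the paper's poly(1/p, 1/ε) bound.

```python
import math

def weight_groups(items, capacity, eps):
    """Partition (value, weight) items into geometric weight classes.

    Items with weight in (capacity*(1+eps)**-(k+1), capacity*(1+eps)**-k]
    land in group k, so weights within a group differ by at most a
    (1+eps) factor (group boundaries are an assumption of this sketch).
    """
    groups = {}
    for value, weight in items:
        if weight <= 0 or weight > capacity:
            continue  # items that can never be packed are dropped
        k = math.floor(math.log(capacity / weight, 1 + eps))
        groups.setdefault(k, []).append((value, weight))
    return groups

def sparsify(items, capacity, p, eps):
    """Keep only the highest-value items from each weight group.

    The per-group budget ceil(1/(p*eps)) is a placeholder for the
    paper's poly(1/p, 1/eps) bound; the point is that it does not
    depend on the number of items.
    """
    budget = math.ceil(1 / (p * eps))
    query_set = []
    for group in weight_groups(items, capacity, eps).values():
        group.sort(key=lambda vw: vw[0], reverse=True)
        query_set.extend(group[:budget])
    return query_set

def polyhedral_degree(query_set, capacity):
    """Smallest alpha such that the whole query set fits inside the
    alpha-scaled knapsack polytope {x : w.x <= alpha * capacity}."""
    return sum(w for _, w in query_set) / capacity
```

Note how the degree is measured against the scaled polytope rather than against the optimal solution's cardinality: a query set of a few heavy items and one of many light items can have the same degree if both occupy the same fraction of the (scaled) capacity.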
Problem

Research questions and friction points this paper is trying to address.

Develop sparsifiers for stochastic knapsack and assignment problems
Measure sparsity via polyhedral embedding instead of cardinality
Achieve approximation with degree independent of problem dimensions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Polyhedral sparsification framework using scaled feasibility polytopes
Grouping items by similar weights with charging argument
Polynomial-degree query sets independent of problem dimensions
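The charging argument in the bullets above can be made concrete with a toy, fully deterministic check. The instance, the top-k query rule, and the budgets below are illustrative assumptions, not the paper's construction: with one extra queried item per weight group, an optimal item lost to inactivity or to a query miss is repaired by a same-group substitute.

```python
def knapsack(items, capacity):
    """Exact 0/1 knapsack DP over integer weights; returns the best value."""
    best = [0] * (capacity + 1)
    for value, weight in items:
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

def query(items, budget):
    """Toy query rule: keep the top-`budget` items by value."""
    return sorted(items, reverse=True)[:budget]

# One weight group: five items of weight 3 (capacity 10 fits three of them).
items = [(9, 3), (8, 3), (7, 3), (6, 3), (5, 3)]

# Activation outcome in which the most valuable item happens to be inactive.
active = [it for it in items if it != (9, 3)]

opt_all = knapsack(active, 10)  # optimum over everything active: 8+7+6 = 21
opt_small = knapsack([it for it in query(items, 3) if it in active], 10)
opt_big = knapsack([it for it in query(items, 4) if it in active], 10)
```

With budget 3 the query set misses (6, 3), which the realized optimum needs once (9, 3) turns out inactive, so `opt_small` drops to 15; one extra queried item from the same group (budget 4) restores the full value 21, mirroring the substitution step of the charging argument.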