On Distributed Larger-Than-Memory Subset Selection With Pairwise Submodular Functions

📅 2024-02-26
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Submodular subset selection for ultra-large-scale datasets (e.g., tens of billions of samples) faces two key challenges: (i) the target subset exceeds main-memory capacity, and (ii) no central node is available to coordinate or store the selected subset. Method: We propose the first decentralized, external-memory-friendly distributed submodular optimization framework. It combines a distributed bounding algorithm that prunes points with multi-round, sharded greedy selection, eliminating reliance on a central machine for storing the final subset while providing theoretical approximation guarantees. Contribution/Results: Our method matches the performance of centralized algorithms on CIFAR-100 and ImageNet. It scales to a 13-billion-sample dataset with negligible accuracy loss and significantly reduced training cost, demonstrating both theoretical soundness and practical scalability for massive decentralized data regimes.
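To make the bounding idea concrete: for a submodular function, a point's marginal gain is largest with respect to the empty set and smallest with respect to all other points, so each point's gain during greedy selection lies in a computable interval. The sketch below illustrates this with a facility-location function (a standard pairwise submodular example); it is not the paper's algorithm, and all names (`fl_gain`, `bounding_pass`, the threshold `tau`) are illustrative.

```python
import numpy as np

def fl_gain(sim, selected, i):
    # Marginal gain of column i under facility location:
    # f(S) = sum over rows of the max similarity to any selected column.
    if not selected:
        return float(sim[:, i].sum())
    cur = sim[:, selected].max(axis=1)
    return float(np.maximum(cur, sim[:, i]).sum() - cur.sum())

def utility_bounds(sim):
    """Bound each point's marginal gain at any step of greedy selection:
    by submodularity, the gain w.r.t. the empty set is an upper bound and
    the gain w.r.t. all other points is a lower bound."""
    n = sim.shape[1]
    ub = np.array([fl_gain(sim, [], i) for i in range(n)])
    lb = np.array([fl_gain(sim, [j for j in range(n) if j != i], i)
                   for i in range(n)])
    return lb, ub

def bounding_pass(sim, tau):
    """One pruning pass: accept points whose gain is guaranteed >= tau,
    discard points whose gain can never reach tau; the rest stay
    undecided and fall through to a greedy round."""
    lb, ub = utility_bounds(sim)
    accept = np.nonzero(lb >= tau)[0].tolist()
    discard = np.nonzero(ub < tau)[0].tolist()
    return accept, discard
```

Iterating such passes with tightening thresholds shrinks the undecided set without any machine ever holding the full subset, which is the regime the summary describes.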

📝 Abstract
Many learning problems hinge on the fundamental problem of subset selection, i.e., identifying a subset of important and representative points. For example, selecting the most significant samples in ML training can not only reduce training costs but also enhance model quality. Submodularity, a discrete analogue of convexity, is commonly used for solving subset selection problems. However, existing algorithms for optimizing submodular functions are sequential, and the prior distributed methods require at least one central machine to fit the target subset. In this paper, we relax the requirement of having a central machine for the target subset by proposing a novel distributed bounding algorithm with provable approximation guarantees. The algorithm iteratively bounds the minimum and maximum utility values to select high quality points and discard the unimportant ones. When bounding does not find the complete subset, we use a multi-round, partition-based distributed greedy algorithm to identify the remaining subset. We show that these algorithms find high quality subsets on CIFAR-100 and ImageNet with marginal or no loss in quality compared to centralized methods, and scale to a dataset with 13 billion points.
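For context, the sequential baseline the abstract refers to is the classical greedy algorithm for monotone submodular maximization, which achieves the 1 − 1/e approximation guarantee. A minimal sketch on a facility-location objective (one example of a pairwise submodular function; the function and names here are illustrative, not from the paper):

```python
import numpy as np

def greedy_subset(sim, k):
    """Classical greedy maximization of the facility-location function
    f(S) = sum_v max_{s in S} sim[v, s]; at each step, add the column
    with the largest marginal gain. Carries the 1 - 1/e guarantee."""
    selected = []
    covered = np.zeros(sim.shape[0])  # best similarity achieved per row so far
    for _ in range(k):
        # Marginal gain of every candidate column, computed via broadcasting.
        gains = np.maximum(covered[:, None], sim).sum(axis=0) - covered.sum()
        gains[selected] = -np.inf  # never re-select a chosen point
        best = int(np.argmax(gains))
        selected.append(best)
        covered = np.maximum(covered, sim[:, best])
    return selected
```

The sequential dependence is visible here: each iteration needs the `covered` state produced by all previous picks, which is exactly what makes naive distribution hard and motivates the paper's bounding and partition-based approaches.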
Problem

Research questions and friction points this paper is trying to address.

Distributed subset selection for large datasets
Overcoming memory limitations in submodular optimization
Scalable algorithms for billion-scale data processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributed bounding algorithm for subset selection
Multi-round partition-based distributed greedy algorithm
Scalable to datasets with billions of points
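The partition-based greedy idea above can be sketched as a minimal two-round procedure: each shard runs greedy locally on its own partition, and a second greedy round selects the final subset from the union of the local solutions. This is in the spirit of classic partition-based distributed greedy (e.g., GreeDi-style baselines); the paper's multi-round variant is more involved and, notably, avoids any single machine having to hold the whole subset. All names below are illustrative.

```python
import numpy as np

def greedy(sim, candidates, k):
    """Greedy facility-location selection restricted to `candidates`
    (column indices of the similarity matrix)."""
    selected = []
    covered = np.zeros(sim.shape[0])
    pool = list(candidates)
    for _ in range(min(k, len(pool))):
        gains = [np.maximum(covered, sim[:, c]).sum() - covered.sum()
                 for c in pool]
        best = pool.pop(int(np.argmax(gains)))
        selected.append(best)
        covered = np.maximum(covered, sim[:, best])
    return selected

def two_round_distributed_greedy(sim, k, num_shards):
    """Two-round partition-based sketch: shard the ground set, run greedy
    per shard, then run greedy over the union of shard solutions."""
    n = sim.shape[1]
    shards = np.array_split(np.arange(n), num_shards)
    local_solutions = [greedy(sim, shard.tolist(), k) for shard in shards]
    union = [c for sol in local_solutions for c in sol]
    return greedy(sim, union, k)
```

In the two-round version the merge step still runs on one machine over at most `num_shards * k` candidates; relaxing that central-merge requirement is precisely the gap the paper's decentralized algorithm targets.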