🤖 AI Summary
Selecting jointly relevant and diverse subsets from large-scale data remains challenging in recommender systems and retrieval-augmented generation (RAG). Existing methods, such as Maximal Marginal Relevance (MMR) and the distributed method DGDS, suffer from scalability limitations and degraded quality in distributed settings.
Method: This paper proposes MUSS, a scalable, distributed, multi-level optimization framework for joint relevance-diversity subset selection. It introduces a structured multi-level selection paradigm that combines hierarchical graph partitioning with distributed greedy approximation and comes with a theoretically grounded constant-factor approximation guarantee.
Contribution/Results: The framework achieves up to 4.5–20× speedup in recommendation tasks without accuracy loss, and improves RAG question-answering accuracy by up to 6 percentage points. It significantly outperforms state-of-the-art distributed subset selection approaches in both efficiency and effectiveness.
📝 Abstract
The problem of relevant and diverse subset selection has a wide range of applications, including recommender systems and retrieval-augmented generation (RAG). For example, in recommender systems, one is interested in selecting relevant items while providing a diversified recommendation. The constrained subset selection problem is NP-hard, and popular approaches such as Maximal Marginal Relevance (MMR) are based on greedy selection. Many real-world applications involve large-scale data, but the original MMR work did not consider distributed selection. This limitation was later addressed by a method called DGDS, which allows for a distributed setting using random data partitioning. Here, we exploit structure in the data to further improve both scalability and performance on the target application. We propose MUSS, a novel method that uses a multilevel approach to relevant and diverse selection. We provide a rigorous theoretical analysis and show that our method achieves a constant-factor approximation of the optimal objective. In a recommender system application, our method matches the performance of baselines while running 4.5 to 20 times faster. Our method also outperforms baselines by up to 6 percentage points in RAG-based question-answering accuracy.
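The greedy MMR selection mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `mmr_select`, the trade-off parameter `lam`, and the use of dot-product similarity over pre-normalized embedding vectors are all assumptions for the example.

```python
import numpy as np

def mmr_select(query_vec, item_vecs, k, lam=0.5):
    """Greedy Maximal Marginal Relevance (illustrative sketch).

    At each step, pick the candidate that maximizes
    lam * relevance(i) - (1 - lam) * max similarity to items
    already selected, balancing relevance against diversity.
    Vectors are assumed to be L2-normalized.
    """
    rel = item_vecs @ query_vec        # relevance of each item to the query
    sim = item_vecs @ item_vecs.T      # pairwise item-item similarity
    selected = []
    candidates = list(range(len(item_vecs)))
    for _ in range(min(k, len(candidates))):
        def mmr_score(i):
            # Penalty: similarity to the closest already-selected item
            redundancy = max((sim[i, j] for j in selected), default=0.0)
            return lam * rel[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

Each greedy step scans all remaining candidates, which is what makes MMR costly at scale and motivates the distributed and multi-level variants (DGDS, MUSS) discussed above.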