BANG: Billion-Scale Approximate Nearest Neighbor Search using a Single GPU

📅 2024-01-20
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
Existing GPU-based approximate nearest neighbor search (ANNS) solutions for billion-scale high-dimensional data are constrained by single-GPU memory capacity and suffer from PCIe bandwidth bottlenecks when employing data sharding. This paper proposes a CPU–GPU co-processing architecture: the graph index resides in host memory, while compressed, quantized vector data resides in GPU memory for distance computation during graph traversal. The design integrates customized GPU kernels, CPU–GPU asynchronous pipelining, and cache-aware optimizations to enable end-to-end search over the full graph. Evaluated on a single NVIDIA A100 GPU, the method achieves 30–200× higher throughput than state-of-the-art approaches at 90% recall on billion-scale datasets, effectively overcoming both single-GPU memory and interconnect bandwidth limitations.

📝 Abstract
Approximate Nearest Neighbour Search (ANNS) is a subroutine in algorithms routinely employed in information retrieval, data mining, image processing, and beyond. Recent works have established that graph-based ANNS algorithms are practically more efficient than the other methods proposed in the literature. The growing volume and dimensionality of data necessitates designing scalable techniques for ANNS. To this end, prior art has explored parallelizing graph-based ANNS on the GPU, leveraging its massive parallelism. The current state-of-the-art GPU-based ANNS algorithms either (i) require both the dataset and the generated graph index to reside entirely in GPU memory, or (ii) partition the dataset into small independent shards, each of which fits in GPU memory, and perform the search on these shards on the GPU. While the first approach fails to handle large datasets due to the limited memory available on the GPU, the latter delivers poor performance on large datasets due to high data traffic over the low-bandwidth PCIe bus. We introduce BANG, a first-of-its-kind technique for graph-based ANNS on the GPU for billion-scale datasets that cannot entirely fit in GPU memory. BANG stands out by harnessing a compressed form of the dataset on a single GPU to perform distance computations while efficiently accessing the graph index kept in host memory, enabling efficient ANNS on large graphs within the limited GPU memory. BANG incorporates highly optimized GPU kernels and proceeds in phases that run concurrently on the GPU and CPU, taking advantage of their architectural specificities. We evaluate BANG using a single NVIDIA Ampere A100 GPU on three popular ANN benchmark datasets. BANG outperforms the state of the art comprehensively. Notably, on the billion-scale datasets, we achieve throughputs 30x-200x higher than those of the competing methods at a high recall value of 0.9.
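The search loop the abstract describes can be pictured as a best-first graph traversal that alternates between two roles: fetching neighbor lists from a host-resident graph index (the CPU phase) and computing distances against a compressed copy of the vectors (the GPU phase). The sketch below is a minimal, hedged illustration of that split in plain Python, not BANG's actual implementation; `host_graph`, `query_dist`, and all other names are hypothetical stand-ins.

```python
import heapq

def greedy_search(entry, query_dist, host_graph, k=4, beam=8):
    """Best-first traversal over a graph index, illustrative only.

    entry      -- id of the start node
    query_dist -- callable: node id -> approximate distance to the query,
                  standing in for the GPU-side computation on compressed data
    host_graph -- dict: node id -> list of neighbor ids, standing in for
                  the graph index kept in host memory
    """
    visited = {entry}
    frontier = [(query_dist(entry), entry)]       # min-heap ordered by distance
    results = []
    while frontier:
        d, node = heapq.heappop(frontier)
        results.append((d, node))
        # CPU-side phase: fetch this node's neighbor list from host memory.
        for nb in host_graph.get(node, []):
            if nb not in visited:
                visited.add(nb)
                # GPU-side phase: distance against the compressed vectors.
                heapq.heappush(frontier, (query_dist(nb), nb))
        # Bound the frontier, mimicking a fixed-width search beam.
        frontier = heapq.nsmallest(beam, frontier)
        heapq.heapify(frontier)
    return sorted(results)[:k]
```

In the real system the two phases run concurrently and in a pipeline across queries; here they are sequential only to keep the control flow readable.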
Problem

Research questions and friction points this paper is trying to address.

Handling billion-scale datasets that exceed single-GPU memory limits
Accessing the graph index efficiently from host memory during search
Overlapping CPU and GPU processing phases to sustain throughput
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compressed dataset on GPU for distance computations
Graph index efficiently accessed from host memory
Concurrent GPU-CPU phases with optimized kernels
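The first innovation, keeping only a compressed form of the dataset on the GPU, is what makes billion-scale search fit in device memory: with product-quantization-style compression, each vector is stored as a handful of small codebook indices, and query distances reduce to table lookups. The toy sketch below illustrates the idea with hand-picked, one-dimensional subspace codebooks; it is an assumption-laden simplification, not the paper's actual compression scheme.

```python
def pq_encode(vec, codebooks):
    """Replace each subvector with the index of its nearest centroid."""
    code = []
    for sub, book in zip(vec, codebooks):   # one scalar subspace per codebook
        code.append(min(range(len(book)), key=lambda i: (book[i] - sub) ** 2))
    return code

def pq_distance(query, code, codebooks):
    """Asymmetric distance: exact query vs. the quantized database vector."""
    # Precompute one lookup table per subspace for this query...
    tables = [[(q - c) ** 2 for c in book] for q, book in zip(query, codebooks)]
    # ...then each database vector costs only one lookup per subspace.
    return sum(table[idx] for table, idx in zip(tables, code))
```

Because the per-query tables are shared across all database vectors, the per-vector work is tiny and data-parallel, which is what makes this step a good fit for GPU kernels.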
V. Karthik — IIT Hyderabad, India
Saim Khan — IIT Hyderabad, India
Somesh Singh — LabEx MILYON and LIP (UMR5668), France
H. Simhadri — Microsoft Research, USA
Jyothi Vedurada — IIT Hyderabad, India