🤖 AI Summary
TB-scale high-dimensional vector similarity search on SSDs suffers from low cache efficiency and poor disk-access locality because existing systems co-locate graph indices with vector data. This paper is the first to explicitly separate the two memory access patterns involved: proximity-graph traversal (high-frequency, random access) versus raw-vector reads (low-frequency, bulk access). It proposes a graph-centric data layout and caching optimization: (1) a compact in-memory caching mechanism for graph adjacency lists, and (2) a redesigned on-disk block format that improves I/O locality. Experiments under disk-resident settings demonstrate over 60% higher average query throughput and over 35% lower latency than state-of-the-art systems, significantly alleviating the I/O bottleneck in large-scale vector retrieval.
📝 Abstract
Similarity-based vector search underpins many important applications, but a key challenge is processing massive vector datasets (e.g., in TBs). To reduce costs, some systems utilize SSDs as the primary data storage. They employ a proximity graph, which connects similar vectors to form a graph and is the state-of-the-art index for vector search. However, these systems are hindered by sub-optimal data layouts that fail to effectively utilize valuable memory space to reduce disk access and suffer from poor locality for accessing disk-resident data. Through extensive profiling and analysis, we found that the structure of the proximity graph index is accessed more frequently than the vectors themselves, yet existing systems do not distinguish between the two. To address this problem, we design the Gorgeous system with the principle of prioritizing graph structure over vectors. Specifically, Gorgeous features a memory cache that keeps the adjacency lists of graph nodes to improve cache hits and a disk block format that explicitly stores neighbors' adjacency lists along with a vector to enhance data locality. Experimental results show that Gorgeous consistently outperforms two state-of-the-art disk-based systems for vector search, boosting average query throughput by over 60% and reducing query latency by over 35%.
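The "graph structure over vectors" principle can be illustrated with a toy sketch. This is not Gorgeous's actual code or API (all names here are hypothetical): it only shows the two ideas the abstract describes, namely an in-memory cache that holds adjacency lists but not vectors, and a disk block that stores a node's vector together with its neighbors' adjacency lists so that one block read prefetches graph structure for the next hops.

```python
from dataclasses import dataclass

@dataclass
class DiskBlock:
    """Hypothetical on-disk block: the node's vector plus graph structure."""
    vector: list[float]                 # raw vector (cold: low-frequency, bulk access)
    adj: list[int]                      # this node's adjacency list
    neighbor_adj: dict[int, list[int]]  # neighbors' adjacency lists, co-located for locality

class GraphFirstStore:
    """Toy store: memory is spent on adjacency lists only, never on vectors."""

    def __init__(self, blocks: dict[int, DiskBlock]):
        self.blocks = blocks                        # stands in for SSD-resident data
        self.adj_cache: dict[int, list[int]] = {}   # hot: compact adjacency lists
        self.disk_reads = 0                         # counts simulated block I/Os

    def _read_block(self, node: int) -> DiskBlock:
        self.disk_reads += 1
        return self.blocks[node]

    def neighbors(self, node: int) -> list[int]:
        # Hot path of graph traversal: served from memory on a cache hit.
        if node not in self.adj_cache:
            block = self._read_block(node)
            self.adj_cache[node] = block.adj
            # The same block read also yields the neighbors' adjacency
            # lists, so the next hops avoid disk entirely.
            self.adj_cache.update(block.neighbor_adj)
        return self.adj_cache[node]

    def vector(self, node: int) -> list[float]:
        # Vectors are deliberately not cached: always a disk read.
        return self._read_block(node).vector

# Tiny 3-node path graph: 0 - 1 - 2
store = GraphFirstStore({
    0: DiskBlock([0.0], [1], {1: [0, 2]}),
    1: DiskBlock([1.0], [0, 2], {0: [1], 2: [1]}),
    2: DiskBlock([2.0], [1], {1: [0, 2]}),
})
```

Walking from node 0 to node 1 costs a single block read: `store.neighbors(0)` reads node 0's block and prefetches node 1's adjacency list, so `store.neighbors(1)` is a pure cache hit.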