Engineering MultiQueues: Fast Relaxed Concurrent Priority Queues

📅 2021-07-03
🏛️ Embedded Systems and Applications
📈 Citations: 6
Influential: 3
🤖 AI Summary
Traditional concurrent priority queues often become performance bottlenecks under parallel workloads, limiting throughput and increasing latency. This paper proposes MultiQueues, a high-throughput, low-latency concurrent priority queue with relaxed semantics. Methodologically, it shards the queue across multiple sequential priority queues, batches operations on them in three orthogonal ways, and applies a seemingly paradoxical "wait-free locking" technique that converts sequential heap structures into efficient relaxed concurrent ones. It further introduces two quantitative metrics, rank error and delay, to rigorously characterize relaxation quality. Experimental evaluation on representative workloads, including online scheduling and discrete-event simulation, demonstrates 3–10× higher throughput than state-of-the-art designs while maintaining bounded rank error and stable, low latency. MultiQueues thus achieve superior scalability and predictability without sacrificing practicality.
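The core MultiQueue idea can be sketched in a few lines. The following is a minimal, single-threaded sketch (not the paper's implementation: it omits the locking, batching, and stickiness optimizations): insertions go to a uniformly random heap, and delete-min samples two heaps and pops from the one with the smaller minimum, the "power of two choices" rule that keeps rank error small in expectation.

```python
import heapq
import random

class MultiQueue:
    """Relaxed priority queue backed by several sequential binary heaps.

    delete_min samples two non-empty heaps uniformly at random and pops
    from the one whose top element is smaller, so deletions are close
    to, but not always exactly, the global minimum.
    """

    def __init__(self, num_queues):
        self.queues = [[] for _ in range(num_queues)]

    def insert(self, key):
        # Insertions go to a uniformly random heap.
        heapq.heappush(random.choice(self.queues), key)

    def delete_min(self):
        # Two-choice sampling among non-empty heaps; None when empty.
        candidates = [q for q in self.queues if q]
        if not candidates:
            return None
        a = random.choice(candidates)
        b = random.choice(candidates)
        best = a if a[0] <= b[0] else b
        return heapq.heappop(best)
```

Because semantics are relaxed, a sequence of delete-min calls is not guaranteed to come out in sorted order, but every inserted element is eventually deleted exactly once.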

📝 Abstract
Priority queues with parallel access are an attractive data structure for applications like prioritized online scheduling, discrete event simulation, or greedy algorithms. However, a classical priority queue constitutes a severe bottleneck in this context, leading to very small throughput. Hence, there has been significant interest in concurrent priority queues with relaxed semantics. We investigate the complementary quality criteria rank error (how close are deleted elements to the global minimum) and delay (for each element x, how many elements with lower priority are deleted before x). In this paper, we introduce MultiQueues as a natural approach to relaxed priority queues based on multiple sequential priority queues. Their naturally high theoretical scalability is further enhanced by using three orthogonal ways of batching operations on the sequential queues. Experiments indicate that MultiQueues present a very good performance-quality tradeoff and considerably outperform competing approaches in at least one of these aspects. We employ a seemingly paradoxical technique of "wait-free locking" that might be of more general interest to convert sequential data structures to relaxed concurrent data structures.
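The two quality criteria defined in the abstract can be made concrete with a small helper (an illustrative sketch for distinct keys in a min-queue, with function names of my choosing, not from the paper): the rank error of a deletion counts how many smaller elements were still in the queue at that moment (0 means the exact minimum was taken), and the delay of an element x counts how many lower-priority, i.e. larger, elements were deleted before x.

```python
def rank_errors(inserted, deletion_order):
    """Rank error of each deletion: number of smaller elements still
    in the queue when that element was removed (0 = exact minimum).
    Assumes distinct keys and that every deleted key was inserted."""
    remaining = sorted(inserted)
    errors = []
    for x in deletion_order:
        i = remaining.index(x)  # elements before index i are smaller
        errors.append(i)
        remaining.pop(i)
    return errors

def delays(deletion_order):
    """Delay of each element x: number of larger (lower-priority)
    elements deleted before x."""
    return [sum(1 for y in deletion_order[:i] if y > x)
            for i, x in enumerate(deletion_order)]
```

For example, deleting in the order 2, 1, 3 from {1, 2, 3} gives rank errors [1, 0, 0] (the first deletion skipped over 1) and delays [0, 1, 0] (element 1 was delayed once, by the earlier deletion of 2).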
Problem

Research questions and friction points this paper is trying to address.

Addressing low throughput in parallel priority queues
Improving scalability via buffering and cache optimization
Balancing throughput and quality in relaxed semantics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses multiple sequential priority queues
Enhances scalability via buffering and batching
Employs wait-free locking technique
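The "wait-free locking" idea in the last bullet can be sketched as follows (an illustrative sketch of the principle, not the paper's C++ implementation, and shown single-threaded for the insert path only): each sequential queue is guarded by a lock, but a thread never blocks on it. If a non-blocking acquire fails, the thread simply re-samples a different random queue, so contention is resolved by retrying elsewhere rather than by waiting.

```python
import heapq
import random
import threading

class LockedMultiQueue:
    """Each heap is protected by a lock, but threads never block:
    acquire(blocking=False) either succeeds immediately or the thread
    picks another random heap and tries again."""

    def __init__(self, num_queues):
        self.queues = [[] for _ in range(num_queues)]
        self.locks = [threading.Lock() for _ in range(num_queues)]

    def insert(self, key):
        while True:
            i = random.randrange(len(self.queues))
            if self.locks[i].acquire(blocking=False):  # never blocks
                try:
                    heapq.heappush(self.queues[i], key)
                    return
                finally:
                    self.locks[i].release()
            # Lock was held by someone else: re-sample instead of waiting.
```

This is why the technique looks paradoxical: locks are used, yet no operation ever waits on one, which is what lets a sequential data structure be reused almost unchanged inside a relaxed concurrent one.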
Marvin Williams
Karlsruhe Institute of Technology, Germany
P. Sanders
Karlsruhe Institute of Technology, Germany
Roman Dementiev
Intel