🤖 AI Summary
Existing research predominantly focuses on broker-based messaging systems, leaving brokerless messaging libraries (such as ZeroMQ, NanoMsg, and NNG) without a systematic, quantitative performance evaluation. Method: We design and open-source a standardized benchmarking framework to conduct the first multi-dimensional empirical assessment of mainstream brokerless messaging libraries, measuring throughput and end-to-end latency under diverse network conditions and across communication patterns (e.g., request-reply, publish-subscribe). Our approach combines qualitative analysis with rigorous experimental measurement. Contribution/Results: The study identifies performance bottlenecks, operational boundaries, and inherent trade-offs among these libraries, filling a critical gap in their systematic evaluation. The framework is reproducible and extensible, and it gives practitioners empirically grounded, actionable guidance for selecting brokerless messaging middleware for real-world deployments.
📝 Abstract
Messaging systems are essential for efficiently transferring large volumes of data, ensuring rapid response times and high-throughput communication. The state of the art on messaging systems mainly focuses on the performance evaluation of brokered systems, which use an intermediate broker to guarantee reliability and quality of service. Over the past decade, however, brokerless messaging systems have emerged, eliminating the broker as a single point of failure and trading off reliability guarantees for higher performance. Research on evaluating the performance of brokerless systems remains scarce. In this work, we focus solely on brokerless messaging systems. First, we perform a qualitative analysis of several candidates to identify the most promising ones. We then design and implement an extensive open-source benchmarking suite to systematically and fairly evaluate the performance of the chosen libraries, namely ZeroMQ, NanoMsg, and NanoMsg-Next-Generation (NNG). We evaluate these libraries under different metrics and workload conditions, and provide useful insights into their limitations. Our analysis enables practitioners to select the most suitable library for their requirements.
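To make the request-reply pattern mentioned above concrete, here is a minimal sketch of a brokerless exchange using pyzmq, the Python binding for ZeroMQ. This is an illustrative example of ours, not part of the paper's benchmarking suite; the endpoint name `inproc://demo` is an arbitrary choice, and pyzmq is assumed to be installed.

```python
import threading
import zmq  # pyzmq, the Python binding for ZeroMQ (assumed installed)

ctx = zmq.Context.instance()

# REP socket: the "server" side of the request-reply pattern.
# Bind before starting the client so inproc:// peers can connect.
rep = ctx.socket(zmq.REP)
rep.bind("inproc://demo")  # arbitrary in-process endpoint name

def serve_one_request(sock):
    # Receive one request and echo it back with a prefix; no broker is
    # involved, the two sockets talk to each other directly.
    msg = sock.recv()
    sock.send(b"reply:" + msg)
    sock.close()

server = threading.Thread(target=serve_one_request, args=(rep,))
server.start()

# REQ socket: the "client" side; send one request, then wait for the reply.
req = ctx.socket(zmq.REQ)
req.connect("inproc://demo")
req.send(b"ping")
reply = req.recv()
print(reply)  # b'reply:ping'

req.close()
server.join()
ctx.term()
```

Publish-subscribe works analogously with `zmq.PUB`/`zmq.SUB` sockets, with subscribers filtering messages by topic prefix; NanoMsg and NNG expose the same patterns through their own APIs.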