🤖 AI Summary
To address the high communication overhead of polyglot persistence and the inefficient cross-model processing caused by single-model storage engines in existing multi-model databases, this paper proposes an integrated multi-model storage engine architecture. The architecture unifies heterogeneous storage engines, each specialized and optimized for a distinct data model; introduces a multi-stage hash join algorithm to enable efficient cross-model joins; and compiles unified query plans with coordinated execution across models. Experimental evaluation shows up to a 188× speedup over the best-performing baseline on representative multi-model analytical workloads, with significantly improved performance and scalability under complex, mixed-model query loads.
📝 Abstract
Modern data analytics workloads increasingly require handling multiple data models simultaneously. Two primary approaches meet this need: polyglot persistence and multi-model database systems. Polyglot persistence employs a coordinator program to manage several independent database systems but suffers from high communication costs due to its physically disaggregated architecture. Meanwhile, existing multi-model database systems rely on a single storage engine optimized for a specific data model, resulting in inefficient processing across diverse data models. To address these limitations, we present M2, a multi-model analytic system with integrated storage engines. M2 treats all data models as first-class entities, composing query plans that incorporate operations across models. To effectively combine data from different models, the system introduces a specialized inter-model join algorithm called the multi-stage hash join. Our evaluation demonstrates that M2 outperforms existing approaches by up to 188× on multi-model analytics, confirming the effectiveness of our proposed techniques.
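The abstract names the multi-stage hash join only at a high level; the paper's actual algorithm is not described here. As a rough intuition, one can imagine a staged join that builds a hash table on one model's data and then probes it successively with records from the other models. The sketch below is purely illustrative, with hypothetical inputs (relational rows, documents, graph edges) and stage structure not taken from the paper:

```python
def multi_stage_hash_join(relational_rows, documents, graph_edges):
    """Illustrative staged hash join across three data models.

    relational_rows: list of (key, payload) tuples
    documents:       list of dicts containing a "key" field
    graph_edges:     list of (src, dst) vertex pairs
    """
    # Stage 1: build a hash table over the relational input, keyed on the join key.
    table = {}
    for key, payload in relational_rows:
        table.setdefault(key, []).append(payload)

    # Stage 2: probe with documents, keeping matches in an intermediate
    # hash table so the next stage can probe without rescanning.
    intermediate = {}
    for doc in documents:
        key = doc["key"]
        for payload in table.get(key, []):
            intermediate.setdefault(key, []).append((payload, doc))

    # Stage 3: probe the intermediate table with graph edges on the source vertex,
    # emitting fully joined cross-model results.
    results = []
    for src, dst in graph_edges:
        for payload, doc in intermediate.get(src, []):
            results.append({"key": src, "row": payload,
                            "doc": doc, "neighbor": dst})
    return results


rows = [(1, "alice"), (2, "bob")]
docs = [{"key": 1, "tags": ["x"]}]
edges = [(1, 2), (2, 3)]
joined = multi_stage_hash_join(rows, docs, edges)
```

Staging the probes this way means each model's data is scanned once, and only keys that survive earlier stages are carried forward; whether M2's algorithm follows this exact structure is an assumption.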