DAGs for the Masses

📅 2025-06-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing DAG-based consensus protocols suffer from high graph maintenance overhead and limited scalability, as each node linearly references all preceding nodes. To address this, we propose a sparse DAG consensus architecture. Our core innovation is that each node randomly references only a constant number of nodes from prior rounds, combined with causal relationship compression and a modified Bullshark protocol. This design preserves Byzantine fault tolerance for $f < n/3$ while drastically reducing metadata transmission and network load. Simulation results demonstrate that the approach achieves $O(1)$ communication complexity per node, improves throughput by 3.2×, reduces latency by 47%, and enables stable, low-latency consensus at scale—supporting up to ten thousand nodes. The proposed architecture establishes a novel, scalable paradigm for DAG-based consensus in large-scale distributed systems.

📝 Abstract
A recent approach to building consensus protocols on top of Directed Acyclic Graphs (DAGs) shows much promise due to its simplicity and stable throughput. However, as each node in the DAG typically includes a linear number of references to the nodes in the previous round, prior DAG protocols only scale up to a certain point when the overhead of maintaining the graph becomes the bottleneck. To enable large-scale deployments of DAG-based protocols, we propose a sparse DAG architecture, where each node includes only a constant number of references to random nodes in the previous round. We present a sparse version of Bullshark, one of the most prominent DAG-based consensus protocols, and demonstrate its improved scalability. Remarkably, unlike other protocols that use random sampling to reduce communication complexity, we manage to avoid sacrificing resilience: the protocol can tolerate up to $f<n/3$ Byzantine faults (where $n$ is the number of participants), same as its less scalable deterministic counterpart. The proposed "sparse" methodology can be applied to any protocol that maintains disseminated system updates and causal relations between them in a graph-like structure. Our simulations show that the considerable reduction of transmitted metadata in sparse DAGs results in more efficient network utilization and better scalability.
Problem

Research questions and friction points this paper is trying to address.

Improving scalability of DAG-based consensus protocols
Reducing overhead in maintaining DAG graph structures
Maintaining Byzantine fault tolerance in sparse DAGs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse DAG architecture with constant references
Maintains resilience despite random sampling
Reduces metadata for better scalability
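The core idea in the bullets above, constant fan-in instead of linear fan-in, can be sketched in a few lines. This is an illustrative toy, not the paper's actual protocol: the function `sparse_parents` and the choice `k=3` are assumptions for demonstration, and real sparse Bullshark layers quorum logic and causal compression on top of the sampling step.

```python
import random

def sparse_parents(prev_round, k=3, rng=random):
    """Each new DAG node references only k random nodes from the
    previous round (constant fan-in), rather than all n of them."""
    return rng.sample(prev_round, min(k, len(prev_round)))

# Dense DAG: every node in round r references all n nodes of round r-1,
# so per-node metadata grows as O(n). Sparse DAG: O(1) per node.
n = 10
round0 = [f"v0_{i}" for i in range(n)]
round1 = [{"id": f"v1_{i}", "parents": sparse_parents(round0)}
          for i in range(n)]
```

With n honest-majority participants, each node now ships a constant-size parent list, which is what drives the reduced metadata transmission the summary reports.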
Michael Anoprenko
Institut Polytechnique de Paris, France
Andrei Tonkikh
Aptos Labs
Alexander Spiegelman
Aptos Labs
Petr Kuznetsov
Professor of Computer Science, Telecom Paris, Institut Polytechnique Paris
Distributed computing · fault tolerance · synchronization