Disentangled Condensation for Large-scale Graphs

📅 2024-01-18
🏛️ arXiv.org
📈 Citations: 3
✨ Influential: 0
🤖 AI Summary
Existing graph condensation methods for large-scale graphs suffer from low efficiency and poor convergence due to the joint optimization of nodes, edges, and GNN parameters. To address this, the paper proposes a GNN-free, two-stage decoupled condensation framework: the first stage condenses nodes by aligning their features with anchors of the original graph, and the second stage generates edges by transferring the original structural knowledge through neighborhood anchors, without ever optimizing GNNs. This paradigm significantly reduces optimization complexity and improves scalability. Experiments show a speedup of at least 10× on medium-scale graphs with comparable accuracy; notably, the method scales to Ogbn-papers100M, a graph with over 100 million nodes. Downstream tasks on Ogbn-products further show accuracy improvements exceeding 5% over prior approaches.

๐Ÿ“ Abstract
Graph condensation has emerged as an intriguing technique to save the expensive training costs of Graph Neural Networks (GNNs) by substituting a condensed small graph for the original graph. Despite the promising results achieved, previous methods usually employ an entangled paradigm of redundant parameters (nodes, edges, GNNs), which incurs complex joint optimization during condensation. This paradigm has considerably impeded the scalability of graph condensation, making it challenging to condense extremely large-scale graphs and generate high-fidelity condensed graphs. Therefore, we propose to disentangle the condensation process into a two-stage GNN-free paradigm, independently condensing nodes and generating edges while eliminating the need to optimize GNNs at the same time. The node condensation module avoids the complexity of GNNs by focusing on node feature alignment with anchors of the original graph, while the edge translation module constructs the edges of the condensed nodes by transferring the original structure knowledge with neighborhood anchors. This simple yet effective approach runs at least 10 times faster than state-of-the-art methods with comparable accuracy on medium-scale graphs. Moreover, the proposed DisCo can successfully scale up to the Ogbn-papers100M graph containing over 100 million nodes with flexible reduction rates and improves performance on the second-largest Ogbn-products dataset by over 5%. Extensive downstream tasks and ablation studies on five common datasets further demonstrate the effectiveness of the proposed DisCo framework. Our code is available at https://github.com/BangHonor/DisCo.
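As a rough illustration of the two-stage GNN-free paradigm (not the authors' implementation), the pipeline can be sketched in plain NumPy. The per-class feature averaging and nearest-neighbor anchor mapping below are simplified stand-ins for DisCo's trained node condensation and edge translation modules:

```python
import numpy as np

def condense_nodes(X, y, rate, rng):
    """Stage 1 (sketch): produce condensed node features per class.

    DisCo optimizes condensed features against anchors of the original
    graph; here we simply average a few random same-class samples, which
    keeps the class-conditional feature distribution roughly aligned.
    """
    Xs, ys = [], []
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        n_c = max(1, int(round(len(idx) * rate)))  # condensed nodes for class c
        for _ in range(n_c):
            pick = rng.choice(idx, size=min(5, len(idx)), replace=False)
            Xs.append(X[pick].mean(axis=0))
            ys.append(c)
    return np.array(Xs), np.array(ys)

def translate_edges(Xs, X, A, k=3):
    """Stage 2 (sketch): edge translation via neighborhood anchors.

    Each condensed node is mapped to its k nearest original nodes (its
    "anchors"); two condensed nodes are connected if any pair of their
    anchors is adjacent in the original adjacency matrix A.
    """
    # squared Euclidean distances, condensed -> original
    d = ((Xs[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    anchors = np.argsort(d, axis=1)[:, :k]
    n = len(Xs)
    As = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if A[np.ix_(anchors[i], anchors[j])].any():
                As[i, j] = As[j, i] = 1
    return As

# toy usage on a small random graph
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))                      # original node features
y = np.array([0] * 10 + [1] * 10)                 # node labels
A = (rng.random((20, 20)) < 0.2).astype(int)      # random adjacency
A = ((A + A.T) > 0).astype(int)
np.fill_diagonal(A, 0)

Xs, ys = condense_nodes(X, y, rate=0.2, rng=rng)  # 4 condensed nodes
As = translate_edges(Xs, X, A, k=3)               # condensed adjacency
```

Note that neither stage touches a GNN: stage 1 works purely in feature space, and stage 2 reuses the original adjacency through the anchor mapping, which is what lets the paradigm avoid the joint optimization the entangled methods require.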
Problem

Research questions and friction points this paper is trying to address.

Graph Compression
Efficiency
Parameter Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph Compression
Two-stage Method
Ultra-large Graph Processing