🤖 AI Summary
This work investigates the theoretical justification of weight sharing (WS) in Variational Graph Autoencoders (VGAEs). Through rigorous theoretical analysis and systematic experiments on multiple graph benchmarks, including Cora and Citeseer, we establish, for the first time, the consistent benefits of WS in VGAEs: it substantially reduces model complexity and improves generalization while maintaining near-identical performance in link prediction and node classification. We show that WS plays a dual role, simplifying optimization and acting as an implicit regularizer, thereby enhancing the stability and robustness of the learned embeddings. Our findings demonstrate that WS is not merely an empirical heuristic but a principled design choice, grounded in both theoretical analysis and empirical validation. This work establishes WS as a default architectural recommendation for VGAEs and related variational graph representation learning frameworks.
📝 Abstract
This paper investigates the understudied practice of weight sharing (WS) in variational graph autoencoders (VGAEs). WS presents both benefits and drawbacks for VGAE model design and node embedding learning, leaving its overall relevance unclear and the question of whether it should be adopted unresolved. We rigorously analyze its implications and, through extensive experiments on a wide range of graphs and VGAE variants, demonstrate that the benefits of WS consistently outweigh its drawbacks. Based on our findings, we recommend WS as an effective way to optimize, regularize, and simplify VGAE models without significant performance loss.
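To make the complexity-reduction claim concrete, below is a minimal NumPy sketch of a two-layer VGAE-style encoder in which WS is taken to mean that the mean and log-variance branches reuse the same second-layer weight matrix. This is a hypothetical illustration, not the paper's actual architecture or code: the class name `VGAEEncoder`, the `share_weights` flag, and the choice of which layer is shared are all assumptions made for the example.

```python
import numpy as np

def gcn_layer(A_hat, X, W):
    # One graph-convolution step on a normalized adjacency A_hat,
    # followed by a ReLU nonlinearity.
    return np.maximum(A_hat @ X @ W, 0.0)

class VGAEEncoder:
    """Toy two-layer VGAE encoder (illustrative sketch only).

    With share_weights=True, the mean and log-variance output
    branches reuse one weight matrix; with share_weights=False,
    each branch keeps its own. This mirrors, in miniature, the
    parameter savings that weight sharing provides.
    """
    def __init__(self, in_dim, hid_dim, lat_dim, share_weights, seed=0):
        rng = np.random.default_rng(seed)
        self.W0 = 0.1 * rng.standard_normal((in_dim, hid_dim))
        self.W_mu = 0.1 * rng.standard_normal((hid_dim, lat_dim))
        # Sharing reuses the same matrix object; no extra parameters.
        self.W_logvar = (self.W_mu if share_weights
                         else 0.1 * rng.standard_normal((hid_dim, lat_dim)))

    def n_params(self):
        # Count each distinct weight matrix once (shared matrices
        # appear under a single object id).
        unique = {id(W): W for W in (self.W0, self.W_mu, self.W_logvar)}
        return sum(W.size for W in unique.values())

    def forward(self, A_hat, X):
        H = gcn_layer(A_hat, X, self.W0)
        mu = A_hat @ H @ self.W_mu          # linear output branches
        logvar = A_hat @ H @ self.W_logvar
        return mu, logvar

# Parameter counts: the shared variant drops one hid_dim x lat_dim matrix.
shared = VGAEEncoder(10, 8, 4, share_weights=True)
free = VGAEEncoder(10, 8, 4, share_weights=False)
print(shared.n_params(), free.n_params())  # shared is strictly smaller
```

In this fully shared toy, the two branches compute identical outputs; real WS schemes typically share only part of the encoder (e.g., the first layer) so that the mean and variance can still differ, which is the design-space trade-off the paper analyzes.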