Graph Positional Autoencoders as Self-supervised Learners

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional graph autoencoders (GAEs) tend to overemphasize low-frequency signals on incomplete graphs and fail to capture discriminative structural information. To address this, the paper proposes the Graph Positional Autoencoder (GraphPAE), a dual-path architecture that jointly reconstructs node features and positional encodings. First, it incorporates learnable positional encodings into message passing to strengthen structural awareness. Second, it uses node representations to recover positional information and approximate Laplacian eigenvectors, enabling the encoder to model multi-frequency structural patterns. By co-reconstructing features and positions, GraphPAE moves beyond conventional single-view masking paradigms. Extensive experiments show state-of-the-art performance on heterophilic node classification, graph property prediction, and transfer learning, outperforming strong baselines including GAE, VGAE, and DGI.

📝 Abstract
Graph self-supervised learning seeks to learn effective graph representations without relying on labeled data. Among various approaches, graph autoencoders (GAEs) have gained significant attention for their efficiency and scalability. Typically, GAEs take incomplete graphs as input and predict missing elements, such as masked nodes or edges. While effective, our experimental investigation reveals that traditional node or edge masking paradigms primarily capture low-frequency signals in the graph and fail to learn expressive structural information. To address these issues, we propose Graph Positional Autoencoders (GraphPAE), which employs a dual-path architecture to reconstruct both node features and positions. Specifically, the feature path uses positional encoding to enhance message passing, improving the GAE's ability to predict the corrupted information. The position path, on the other hand, leverages node representations to refine positions and approximate eigenvectors, thereby enabling the encoder to learn diverse frequency information. We conduct extensive experiments to verify the effectiveness of GraphPAE, including heterophilic node classification, graph property prediction, and transfer learning. The results demonstrate that GraphPAE achieves state-of-the-art performance and consistently outperforms baselines by a large margin.
Problem

Research questions and friction points this paper is trying to address.

Traditional graph autoencoders fail to capture expressive structural information
Existing masking methods primarily learn low-frequency signals, missing diverse frequency information
Current node/edge masking paradigms cannot reconstruct both node features and positions
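The "low-frequency" framing above comes from graph signal processing: the eigenvalues of the graph Laplacian play the role of frequencies, with small eigenvalues corresponding to smooth eigenvectors and large eigenvalues to oscillatory ones. A minimal NumPy illustration (not from the paper) on a 6-node path graph:

```python
import numpy as np

# Toy illustration: on a path graph, Laplacian eigenvalues act as graph
# "frequencies". Eigenvectors for small eigenvalues vary smoothly along
# the path; eigenvectors for large eigenvalues oscillate rapidly.
n = 6
A = np.zeros((n, n))
for i in range(n - 1):              # path graph: node i -- node i+1
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A      # combinatorial Laplacian L = D - A

eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order

# Count sign changes along the path as a proxy for oscillation.
def sign_changes(v):
    return int(np.sum(np.sign(v[:-1]) != np.sign(v[1:])))

print(sign_changes(eigvecs[:, 1]))   # lowest nonzero frequency: 1 change
print(sign_changes(eigvecs[:, -1]))  # highest frequency: 5 changes
```

A GAE that only reconstructs masked features behaves like a low-pass filter over this spectrum; GraphPAE's position path is meant to expose the encoder to the higher-frequency components as well.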
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-path architecture reconstructing both node features and positions
Positional encodings injected into message passing to strengthen structural awareness
Node representations refine positions and approximate Laplacian eigenvectors
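The dual-path idea can be sketched in a few lines of NumPy. This is a hedged toy sketch, not the authors' implementation: the one-step "encoder", the linear "decoders" `W_feat`/`W_pos`, and all sizes are illustrative stand-ins; the targets for the position path are Laplacian eigenvectors, as the paper describes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 8, 4, 3                               # nodes, feature dim, num PEs

# Random undirected toy graph and its Laplacian eigenvector PEs (targets).
A = np.triu((rng.random((n, n)) < 0.4).astype(float), 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A
_, U = np.linalg.eigh(L)
pe = U[:, 1:k + 1]                              # skip the constant eigenvector

X = rng.normal(size=(n, d))                     # node features
mask = np.zeros(n, dtype=bool)
mask[:2] = True                                 # nodes with masked features

# Stand-in encoder: one message-passing step over [masked features ; PEs],
# so positional information participates in propagation (feature path input).
X_in = np.where(mask[:, None], 0.0, X)
H = (A + np.eye(n)) @ np.concatenate([X_in, pe], axis=1)

W_feat = rng.normal(size=(H.shape[1], d))       # toy feature-path decoder
W_pos = rng.normal(size=(H.shape[1], k))        # toy position-path decoder

# Feature path: reconstruct masked features. Position path: recover the
# eigenvector PEs from node representations.
feat_loss = np.mean((H[mask] @ W_feat - X[mask]) ** 2)
pos_loss = np.mean((H @ W_pos - pe) ** 2)
total_loss = feat_loss + pos_loss
```

In the real model both decoders are learned networks and the losses are minimized jointly; the sketch only shows how the two reconstruction targets fit together.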