GraphMAE: Masked Autoencoders for Graphs
Self-supervised graph learning via masked feature reconstruction. Traditional supervised graph learning requires expensive node/edge labels that are scarce in real-world graphs. GraphMAE learns representations by masking and reconstructing node features, requiring zero labels. The learned embeddings transfer well to downstream tasks (classification, link prediction, clustering) because the model must capture structural and semantic graph properties to reconstruct masked features from their neighborhood context.
Pipeline: Mask -> GAT Encode -> Re-mask latent -> Decode masked only -> SCE loss.
Reference: Hou et al., “GraphMAE: Self-Supervised Masked Graph Autoencoders”, KDD 2022.
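The first pipeline step can be sketched in plain Rust: a fraction of node feature rows is replaced by a shared mask token before encoding, and the masked indices are kept so the decoder can reconstruct only those rows. The names here (`mask_features`, `mask_rate`) are illustrative, not this crate's API, and the deterministic index choice stands in for the uniform sampling used in practice.

```rust
/// Replace `mask_rate` of the rows in `features` with `mask_token`,
/// returning the corrupted features and the masked row indices.
/// (Hypothetical sketch; GraphMAE samples the masked set uniformly
/// at random and uses a learnable [MASK] embedding as the token.)
fn mask_features(
    features: &[Vec<f32>],
    mask_rate: f32,
    mask_token: &[f32],
) -> (Vec<Vec<f32>>, Vec<usize>) {
    let n = features.len();
    let n_masked = ((n as f32) * mask_rate).round() as usize;
    // Deterministic prefix for the sketch, in place of random sampling.
    let masked: Vec<usize> = (0..n_masked).collect();
    let mut corrupted: Vec<Vec<f32>> = features.to_vec();
    for &i in &masked {
        corrupted[i] = mask_token.to_vec();
    }
    (corrupted, masked)
}

fn main() {
    let x = vec![vec![1.0, 2.0], vec![3.0, 4.0], vec![5.0, 6.0], vec![7.0, 8.0]];
    let (x_masked, idx) = mask_features(&x, 0.5, &[0.0, 0.0]);
    assert_eq!(idx, vec![0, 1]);
    assert_eq!(x_masked[0], vec![0.0, 0.0]); // masked row replaced by token
    assert_eq!(x_masked[2], vec![5.0, 6.0]); // unmasked rows untouched
    println!("masked {} of {} nodes", idx.len(), x.len());
}
```

Returning the indices is what enables the "Decode masked only" step: the loss is computed over the masked rows alone rather than the full feature matrix.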
Structs
- FeatureMasking - Feature masking strategies for GraphMAE.
- GATEncoder - Multi-layer GAT encoder for GraphMAE.
- GraphData - Sparse graph representation.
- GraphMAE - GraphMAE self-supervised model.
- GraphMAEConfig - Configuration for a GraphMAE model.
- GraphMAEDecoder - Decoder that reconstructs only masked node features (key efficiency gain).
- MaskResult - Result of masking node features.
Enums
- LossFn - Loss function variant for reconstruction.
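The two reconstruction criteria such a loss enum would typically select between can be sketched directly: plain MSE, and GraphMAE's scaled cosine error (SCE), `(1 - cos(x, x_hat))^gamma` averaged over the masked nodes. The function names below are illustrative, not this crate's API.

```rust
/// Mean squared error between a feature row and its reconstruction.
fn mse(x: &[f32], x_hat: &[f32]) -> f32 {
    x.iter().zip(x_hat).map(|(a, b)| (a - b).powi(2)).sum::<f32>() / x.len() as f32
}

/// Scaled cosine error from the GraphMAE paper: (1 - cos(x, x_hat))^gamma.
/// gamma >= 1 down-weights easy (already well-aligned) reconstructions.
fn sce(x: &[f32], x_hat: &[f32], gamma: f32) -> f32 {
    let dot: f32 = x.iter().zip(x_hat).map(|(a, b)| a * b).sum();
    let nx = x.iter().map(|a| a * a).sum::<f32>().sqrt();
    let ny = x_hat.iter().map(|b| b * b).sum::<f32>().sqrt();
    let cos = dot / (nx * ny).max(1e-12); // guard against zero vectors
    (1.0 - cos).powf(gamma)
}

fn main() {
    let x = vec![1.0, 0.0];
    // A perfect reconstruction gives zero loss under both criteria.
    assert!(mse(&x, &x) < 1e-6);
    assert!(sce(&x, &x, 2.0) < 1e-6);
    // SCE is scale-invariant: a magnified copy still has zero cosine error,
    // whereas MSE penalizes it.
    assert!(sce(&x, &[2.0, 0.0], 2.0) < 1e-6);
    assert!(mse(&x, &[2.0, 0.0]) > 0.0);
}
```

The scale invariance is why SCE is the default in the paper: it focuses the decoder on feature direction rather than magnitude, which is more robust for high-dimensional node attributes.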