
Module graphmae


§GraphMAE: Masked Autoencoders for Graphs

Self-supervised graph learning via masked feature reconstruction. Traditional supervised graph learning requires expensive node/edge labels that are scarce in real-world graphs. GraphMAE learns representations by masking and reconstructing node features, requiring zero labels. The learned embeddings transfer well to downstream tasks (classification, link prediction, clustering) because the model must capture structural and semantic graph properties to reconstruct masked features from their neighborhood context.

Pipeline: Mask -> GAT Encode -> Re-mask latent -> Decode masked only -> SCE loss.

Reference: Hou et al., “GraphMAE: Self-Supervised Masked Graph Autoencoders”, KDD 2022.
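The first pipeline stage, feature masking, can be sketched in std-only Rust. This is an illustrative re-implementation, not this crate's `FeatureMasking` API: the paper replaces a uniformly sampled fraction of node feature rows with a shared learnable `[MASK]` token, and the masked indices must be kept so that the decoder reconstructs, and the loss scores, masked nodes only. The function name, signature, and the tiny LCG random generator are all assumptions for the sketch.

```rust
/// Replace `mask_rate` of the node feature rows with `mask_token`, in place,
/// returning the indices of the masked nodes. A minimal sketch: a real
/// implementation would use a proper RNG crate instead of the inline LCG.
fn mask_features(
    features: &mut [Vec<f32>],
    mask_token: &[f32],
    mask_rate: f32,
    mut seed: u64,
) -> Vec<usize> {
    let n = features.len();
    let k = ((n as f32) * mask_rate).round() as usize;
    let k = k.min(n);
    // Partial Fisher-Yates shuffle: the first k slots end up holding a
    // uniform random subset of node indices.
    let mut idx: Vec<usize> = (0..n).collect();
    for i in 0..k {
        // Tiny LCG step, a std-only stand-in for a real RNG.
        seed = seed
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        let j = i + (seed >> 33) as usize % (n - i);
        idx.swap(i, j);
    }
    let masked = idx[..k].to_vec();
    for &i in &masked {
        // Overwrite the node's features with the shared mask token.
        features[i].copy_from_slice(mask_token);
    }
    masked
}

fn main() {
    let mut feats = vec![vec![1.0f32; 4]; 10];
    let mask_token = vec![0.0f32; 4];
    let masked = mask_features(&mut feats, &mask_token, 0.5, 42);
    assert_eq!(masked.len(), 5);
    for &i in &masked {
        assert!(feats[i].iter().all(|&x| x == 0.0));
    }
    println!("masked {} of {} nodes", masked.len(), feats.len());
}
```

The returned index list is what makes the later stages cheap: the decoder and the loss only ever touch the masked subset.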

Structs§

FeatureMasking
Feature masking strategies for GraphMAE.
GATEncoder
Multi-layer GAT encoder for GraphMAE.
GraphData
Sparse graph representation.
GraphMAE
GraphMAE self-supervised model.
GraphMAEConfig
Configuration for a GraphMAE model.
GraphMAEDecoder
Decoder that reconstructs only masked node features (key efficiency gain).
MaskResult
Result of masking node features.

Enums§

LossFn
Loss function variant for reconstruction.

Functions§

mse_loss
Mean Squared Error across masked node reconstructions.
sce_loss
Scaled Cosine Error: mean((1 - cos_sim(pred, target))^gamma) over masked nodes.
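The SCE formula above can be written out directly. This is a hedged sketch rather than the crate's actual `sce_loss` (the real function presumably operates on tensors, not `Vec`s); the slice-based signature and the `1e-8` denominator guard are assumptions. `gamma >= 1` down-weights easy (already well-reconstructed) nodes, which is the point of scaling the cosine error.

```rust
/// Cosine similarity between two feature vectors, with a small epsilon
/// guard against zero-norm inputs (an assumption of this sketch).
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb + 1e-8)
}

/// Scaled Cosine Error: mean((1 - cos_sim(pred, target))^gamma), where each
/// row of `pred`/`target` is one masked node's reconstructed/original features.
fn sce_loss(pred: &[Vec<f32>], target: &[Vec<f32>], gamma: f32) -> f32 {
    let sum: f32 = pred
        .iter()
        .zip(target)
        .map(|(p, t)| (1.0 - cosine_similarity(p, t)).max(0.0).powf(gamma))
        .sum();
    sum / pred.len() as f32
}

fn main() {
    // Perfect reconstruction: cosine similarity ~1, so the loss is ~0.
    let pred = vec![vec![1.0, 0.0], vec![0.0, 1.0]];
    assert!(sce_loss(&pred, &pred, 2.0) < 1e-4);
    // Orthogonal reconstruction: cosine similarity 0, per-node loss 1.
    let ortho = vec![vec![0.0, 1.0], vec![1.0, 0.0]];
    assert!((sce_loss(&pred, &ortho, 2.0) - 1.0).abs() < 1e-4);
    println!("ok");
}
```

Unlike plain MSE, SCE is invariant to the magnitude of the reconstructed vectors, so the model is scored on direction in feature space rather than scale.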