Graph Neural Network layers for learning on graph-structured data.
This module provides GNN layers commonly used for:
- AST/CFG analysis in transpilers (depyler, ruchy, bashrs)
- Code structure understanding
- Error pattern detection in CITL
§Implemented Layers
- `GCNConv` - Graph Convolutional Network (Kipf & Welling, 2017)
- `GATConv` - Graph Attention Network (Veličković et al., 2018)
- `SAGEConv` - GraphSAGE (Hamilton et al., 2017)
§Example
```rust
use aprender::nn::gnn::{GCNConv, GATConv, SAGEConv, AdjacencyMatrix};

// Create adjacency matrix for a simple graph
let adj = AdjacencyMatrix::from_edge_index(&[[0, 1], [1, 2], [2, 0]], 3);

// GCN layer
let gcn = GCNConv::new(64, 32);
let x = Tensor::new(&vec![0.0; 3 * 64], &[3, 64]); // 3 nodes, 64 features
let out = gcn.forward(&x, &adj); // [3, 32]
```

§References
- Kipf, T. N., & Welling, M. (2017). Semi-Supervised Classification with Graph Convolutional Networks. ICLR.
- Veličković, P., et al. (2018). Graph Attention Networks. ICLR.
- Hamilton, W. L., et al. (2017). Inductive Representation Learning on Large Graphs. NeurIPS.
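The GCN layer referenced above implements the propagation rule from Kipf & Welling, h' = D̂^(-1/2) (A + I) D̂^(-1/2) h W. The sketch below illustrates one such propagation step on the triangle graph from the example, using plain `Vec<f32>` and scalar node features; the function name is hypothetical and this is not the aprender API (weights and nonlinearity are omitted).

```rust
// Illustrative sketch of one GCN propagation step (Kipf & Welling, 2017):
// h' = D^{-1/2} (A + I) D^{-1/2} h, with the weight matrix and activation omitted.
// Hypothetical standalone code; not the aprender `GCNConv` implementation.
fn gcn_propagate(edges: &[[usize; 2]], num_nodes: usize, h: &[f32]) -> Vec<f32> {
    // Build A + I (adjacency with self-loops) as a dense matrix.
    let mut a = vec![vec![0.0f32; num_nodes]; num_nodes];
    for &[s, t] in edges {
        a[s][t] = 1.0;
        a[t][s] = 1.0; // treat the graph as undirected
    }
    for i in 0..num_nodes {
        a[i][i] = 1.0;
    }
    // Node degrees after adding self-loops.
    let deg: Vec<f32> = a.iter().map(|row| row.iter().sum()).collect();
    // h'_i = sum_j a_ij / sqrt(d_i * d_j) * h_j
    (0..num_nodes)
        .map(|i| {
            (0..num_nodes)
                .map(|j| a[i][j] / (deg[i] * deg[j]).sqrt() * h[j])
                .sum()
        })
        .collect()
}

fn main() {
    // Triangle graph 0-1-2-0, as in the example above.
    let out = gcn_propagate(&[[0, 1], [1, 2], [2, 0]], 3, &[1.0, 2.0, 3.0]);
    println!("{:?}", out); // each node ends up near the neighbourhood mean, ~2.0
}
```

Because the triangle plus self-loops is fully connected, every node converges to the same smoothed value here; on sparser graphs the normalization keeps high-degree nodes from dominating their neighbours.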
Structs§
- `AdjacencyMatrix` - Adjacency matrix representation for GNN operations.
- `GATConv` - Graph Attention Network layer (Veličković et al., 2018).
- `GCNConv` - Graph Convolutional Network layer (Kipf & Welling, 2017).
- `SAGEConv` - GraphSAGE convolutional layer (Hamilton et al., 2017).
Enums§
- `SAGEAggregation` - Aggregation method for GraphSAGE.
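To illustrate what the aggregation choice changes in GraphSAGE, the hedged sketch below compares mean and max aggregation over a node's neighbour features. Both function names are hypothetical standalone helpers, not variants of the crate's `SAGEAggregation` enum.

```rust
// Illustrative neighbour aggregation as used in GraphSAGE (Hamilton et al., 2017).
// Hypothetical sketch; not the aprender `SAGEAggregation` API.
fn aggregate_mean(neighbours: &[f32]) -> f32 {
    neighbours.iter().sum::<f32>() / neighbours.len() as f32
}

fn aggregate_max(neighbours: &[f32]) -> f32 {
    neighbours.iter().cloned().fold(f32::NEG_INFINITY, f32::max)
}

fn main() {
    let feats = [1.0, 4.0, 1.0];
    println!("mean = {}", aggregate_mean(&feats)); // 2.0
    println!("max  = {}", aggregate_max(&feats));  // 4.0
}
```

Mean aggregation smooths over the neighbourhood, while max pooling picks out the strongest single feature; the paper reports that pooling variants can be more expressive on inductive benchmarks.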
Traits§
- `MessagePassing` - Message Passing Neural Network base trait.
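Conceptually, a message-passing layer computes a message along each edge, aggregates incoming messages per node, and updates each node's state. A minimal hypothetical sketch of that pattern (scalar features, sum aggregation) is below; it is not the crate's actual `MessagePassing` trait definition.

```rust
// Hypothetical sketch of the message-passing pattern; not aprender's trait.
trait MessagePassing {
    // Message sent from a source node to a target node along an edge.
    fn message(&self, src: f32, dst: f32) -> f32;
    // Update a node's state from the sum of its incoming messages.
    fn update(&self, node: f32, aggregated: f32) -> f32;

    // One propagation step over a directed edge list, with sum aggregation.
    fn propagate(&self, edges: &[[usize; 2]], h: &[f32]) -> Vec<f32> {
        let mut agg = vec![0.0f32; h.len()];
        for &[s, t] in edges {
            agg[t] += self.message(h[s], h[t]);
        }
        h.iter().zip(&agg).map(|(&n, &a)| self.update(n, a)).collect()
    }
}

// A trivial layer: messages are the source features, update adds them in.
struct SumLayer;
impl MessagePassing for SumLayer {
    fn message(&self, src: f32, _dst: f32) -> f32 { src }
    fn update(&self, node: f32, aggregated: f32) -> f32 { node + aggregated }
}

fn main() {
    // Directed edges 0->1 and 1->2, node features [1, 10, 100].
    let out = SumLayer.propagate(&[[0, 1], [1, 2]], &[1.0, 10.0, 100.0]);
    println!("{:?}", out); // [1.0, 11.0, 110.0]
}
```

GCN, GAT, and GraphSAGE all fit this factoring: they differ mainly in how messages are weighted (degree normalization, learned attention, or a sampled-neighbourhood aggregator).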