# Ruvector GNN
Graph Neural Network layer for Ruvector on HNSW topology with SIMD-accelerated message passing.
ruvector-gnn provides production-ready Graph Neural Network implementations optimized for vector database topologies. It enables learned representations over HNSW index structures for enhanced similarity search and graph-based learning. Part of the Ruvector ecosystem.
## Why Ruvector GNN?
- HNSW-Native: GNN operations directly on HNSW graph structure
- SIMD Optimized: Hardware-accelerated aggregation operations
- Memory Efficient: Memory-mapped weight storage for large models
- Production Ready: Battle-tested with comprehensive benchmarks
- Cross-Platform: Native, Node.js, and WASM support
## Features

### Core Capabilities
- Message Passing: Efficient neighbor aggregation on HNSW graphs (sketched after this list)
- GCN Layers: Graph Convolutional Network implementations
- GAT Layers: Graph Attention Networks with multi-head attention
- GraphSAGE: Inductive representation learning
- Node Embeddings: Learnable node feature transformations
- Batch Processing: Parallel message passing with Rayon
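At its core, message passing reduces to each node aggregating its neighbors' feature vectors. A minimal, self-contained sketch of the mean-aggregation step over adjacency lists (illustrative only, not the crate's API):

```rust
/// One mean-aggregation message-passing step over adjacency lists.
/// Each node averages its own feature vector with its neighbors'.
fn mean_aggregate(features: &[Vec<f32>], neighbors: &[Vec<usize>]) -> Vec<Vec<f32>> {
    features
        .iter()
        .enumerate()
        .map(|(node, feat)| {
            let mut agg = feat.clone(); // start from the node's own features
            for &n in &neighbors[node] {
                for (a, x) in agg.iter_mut().zip(&features[n]) {
                    *a += x;
                }
            }
            let count = (neighbors[node].len() + 1) as f32;
            agg.iter_mut().for_each(|a| *a /= count);
            agg
        })
        .collect()
}

fn main() {
    // Tiny 3-node path graph: 0 - 1 - 2
    let features = vec![vec![1.0, 0.0], vec![0.0, 1.0], vec![1.0, 1.0]];
    let neighbors = vec![vec![1], vec![0, 2], vec![1]];
    println!("{:?}", mean_aggregate(&features, &neighbors)); // node 1 -> [0.667, 0.667]
}
```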
### Advanced Features
- Memory Mapping: Large model support via mmap
- Quantization: INT8/FP16 weight quantization (see the sketch after this list)
- Custom Aggregators: Mean, max, LSTM aggregation
- Skip Connections: Residual connections for deep networks
- Dropout: Regularization during training
- Layer Normalization: Stable training dynamics
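The INT8 path stores each weight tensor as one byte per value plus a scale factor, roughly a 4x reduction over f32. A generic sketch of symmetric per-tensor quantization, the standard scheme behind such features (not the crate's internal code):

```rust
/// Symmetric per-tensor INT8 quantization: w ≈ q * scale, q in [-127, 127].
fn quantize_int8(weights: &[f32]) -> (Vec<i8>, f32) {
    let max_abs = weights.iter().fold(0.0f32, |m, w| m.max(w.abs()));
    let scale = if max_abs == 0.0 { 1.0 } else { max_abs / 127.0 };
    let quantized = weights
        .iter()
        .map(|w| (w / scale).round().clamp(-127.0, 127.0) as i8)
        .collect();
    (quantized, scale)
}

/// Recover approximate f32 weights for the forward pass.
fn dequantize_int8(quantized: &[i8], scale: f32) -> Vec<f32> {
    quantized.iter().map(|&q| q as f32 * scale).collect()
}
```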
## Installation
Add ruvector-gnn to your Cargo.toml:
```toml
[dependencies]
ruvector-gnn = "0.1.1"
```
### Feature Flags
```toml
[dependencies]
# Default with SIMD and memory mapping
ruvector-gnn = { version = "0.1.1", features = ["simd", "mmap"] }

# WASM-compatible build
ruvector-gnn = { version = "0.1.1", default-features = false, features = ["wasm"] }

# Node.js bindings
ruvector-gnn = { version = "0.1.1", features = ["napi"] }
```
Available features:
- `simd` (default): SIMD-optimized operations
- `mmap` (default): Memory-mapped weight storage
- `wasm`: WebAssembly-compatible build
- `napi`: Node.js bindings via NAPI-RS
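Because the flags are additive, one possible pattern is to select them per compilation target. A hypothetical Cargo.toml layout (adjust the feature choices to your targets):

```toml
# Hypothetical: SIMD + mmap on native targets, the WASM build on wasm32
[target.'cfg(not(target_arch = "wasm32"))'.dependencies]
ruvector-gnn = { version = "0.1.1", features = ["simd", "mmap"] }

[target.'cfg(target_arch = "wasm32")'.dependencies]
ruvector-gnn = { version = "0.1.1", default-features = false, features = ["wasm"] }
```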
## Quick Start
### Basic GNN Layer

```rust
use ruvector_gnn::{GCNLayer, GNNConfig}; // type names assumed
use ndarray::Array2;
```
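A fuller sketch of a single-layer forward pass; `GCNLayer`, `GNNConfig`, their fields, and the `forward` signature are assumptions for illustration, not confirmed API:

```rust
use ndarray::Array2;
use ruvector_gnn::{GCNLayer, GNNConfig}; // type names assumed

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 1,000 nodes with 128-dimensional input features
    let features = Array2::<f32>::zeros((1_000, 128));

    // One GCN layer projecting 128 -> 64 (field names illustrative)
    let config = GNNConfig {
        input_dim: 128,
        output_dim: 64,
        ..Default::default()
    };
    let layer = GCNLayer::new(config)?;

    // Neighbor lists would normally come from the HNSW index
    // (see "Integration with Ruvector Core" below)
    let neighbors: Vec<Vec<usize>> = vec![Vec::new(); 1_000];

    // One round of message passing yields 64-dimensional embeddings
    let embeddings = layer.forward(&features, &neighbors)?;
    assert_eq!(embeddings.dim(), (1_000, 64));
    Ok(())
}
```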
### Graph Attention Network

```rust
use ruvector_gnn::{AttentionConfig, GATLayer}; // GATLayer name assumed

// Configure multi-head attention (field names illustrative)
let config = AttentionConfig {
    num_heads: 8,
    ..Default::default()
};
let gat = GATLayer::new(config)?;

// Forward with attention
let (output, attention_weights) = gat.forward_with_attention(&features, &graph)?;

// Attention weights for interpretability
for (head, weights) in attention_weights.iter().enumerate() {
    println!("head {head}: {} attention coefficients", weights.len());
}
```
### GraphSAGE with Custom Aggregator

```rust
use ruvector_gnn::{Aggregator, SAGEConfig, SAGELayer}; // SAGELayer/Aggregator names assumed

let config = SAGEConfig {
    aggregator: Aggregator::Max, // mean, max, or LSTM aggregation
    ..Default::default()
};
let sage = SAGELayer::new(config)?;

// Mini-batch training with neighbor sampling (sample size illustrative)
let embeddings = sage.forward_minibatch(&features, &graph, &batch_nodes, 25)?;
```
## Integration with Ruvector Core

```rust
use ruvector_core::VectorDB;
use ruvector_gnn::GNNEncoder; // encoder type name assumed

// Load vector database (path illustrative)
let db = VectorDB::open("./vectors.db")?;

// Create GNN that operates on HNSW structure
let gnn = GNNEncoder::new(Default::default())?;

// Get HNSW neighbors for message passing
let hnsw_graph = db.get_hnsw_graph()?;

// Compute GNN embeddings
let gnn_embeddings = gnn.encode(&hnsw_graph)?;

// Enhanced search using GNN embeddings (k = 10 illustrative)
let results = db.search_with_gnn(&query, &gnn_embeddings, 10)?;
```
## API Overview

### Core Types

- GNN layer configuration structs: `AttentionConfig`, `SAGEConfig`
- Message passing interface: per-layer forward passes over HNSW neighbor lists
- Layer types: GCN, GAT, GraphSAGE
### Layer Operations
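The operations used in the Quick Start map onto the layer types as follows (`forward` on the plain GCN layer is assumed by analogy; the other two appear in the examples above):

- `forward`: one message-passing step over the full graph (GCN)
- `forward_with_attention`: GAT forward pass that also returns per-head attention weights
- `forward_minibatch`: GraphSAGE forward pass with neighbor sampling over a node batch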
## Performance

### Benchmarks (100K nodes, average degree 16)

| Operation              | Latency (p50) | GFLOPS |
|------------------------|---------------|--------|
| GCN forward (1 layer)  | ~15ms         | 12.5   |
| GAT forward (8 heads)  | ~45ms         | 8.2    |
| GraphSAGE (2 layers)   | ~25ms         | 10.1   |
| Message aggregation    | ~5ms          | 25.0   |
### Memory Usage

| Model                | Peak Memory     |
|----------------------|-----------------|
| 128 -> 64 (1 layer)  | ~50MB           |
| 128 -> 64 (4 layers) | ~150MB          |
| With mmap weights    | ~10MB (+ disk)  |
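The mmap row stays small because weights remain on disk and pages fault in on demand. A sketch of the mechanism using the `memmap2` and `bytemuck` crates (illustrative; the crate's own loader may differ):

```rust
use std::fs::File;
use memmap2::Mmap;

fn load_weights(path: &str) -> std::io::Result<Mmap> {
    // Map the weight file; pages are faulted in on first touch instead of
    // reading the whole tensor into memory up front.
    let file = File::open(path)?;
    unsafe { Mmap::map(&file) }
}

fn main() -> std::io::Result<()> {
    let mmap = load_weights("weights/gcn_128x64.bin")?; // path illustrative
    // Reinterpret the mapping as f32 weights (raw little-endian layout assumed)
    let weights: &[f32] = bytemuck::cast_slice(&mmap);
    println!("{} parameters, resident on demand", weights.len());
    Ok(())
}
```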
## Related Crates
- ruvector-core - Core vector database engine
- ruvector-gnn-node - Node.js bindings
- ruvector-gnn-wasm - WebAssembly bindings
- ruvector-graph - Graph database engine
## Documentation
- Main README - Complete project overview
- API Documentation - Full API reference
- GitHub Repository - Source code
## License
MIT License - see LICENSE for details.