pub struct GraphTransformer { /* private fields */ }
Graph Transformer with proof-gated operations for Node.js.
Provides sublinear attention over graph structures, physics-informed layers (Hamiltonian dynamics), biologically-inspired learning (spiking networks, Hebbian plasticity), and verified training with proof receipts.
§Example
const { GraphTransformer } = require('ruvector-graph-transformer-node');
const gt = new GraphTransformer();
console.log(gt.version());
§Implementations
impl GraphTransformer
pub fn into_reference( val: GraphTransformer, env: Env, ) -> Result<Reference<GraphTransformer>>
pub fn into_instance(self, env: Env) -> Result<ClassInstance<GraphTransformer>>
impl GraphTransformer
pub fn create_proof_gate(&mut self, dim: u32) -> Result<Value>
pub fn create_attestation(&self, proof_id: u32) -> Result<Vec<u8>>
Create a proof attestation (serializable receipt) for a given proof ID.
Returns the attestation as a byte buffer (82 bytes) that can be embedded in RVF WITNESS_SEG entries.
§Arguments
- proof_id - The proof term ID to create an attestation for
§Example
const proof = gt.proveDimension(64, 64);
const attestation = gt.createAttestation(proof.proof_id);
console.log(attestation.length); // 82
pub fn compose_proofs(&mut self, stages: Vec<Value>) -> Result<Value>
Compose a chain of pipeline stages, verifying type compatibility.
Each stage must have name, input_type_id, and output_type_id.
Returns a composed proof with the overall input/output types and
the number of stages verified.
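The compatibility check can be sketched in plain JavaScript. This is illustrative only, not the native implementation: the stage-descriptor fields match the documentation above, but the shape of the returned object (e.g. `stages_verified`) is an assumption of this sketch, and the proof term the native method produces is omitted.

```javascript
// Sketch: each stage's input type must equal the previous stage's
// output type, exactly as composeProofs requires.
function composeStages(stages) {
  for (let i = 1; i < stages.length; i++) {
    if (stages[i].input_type_id !== stages[i - 1].output_type_id) {
      throw new Error(
        `stage '${stages[i].name}' expects type ${stages[i].input_type_id}, ` +
        `but '${stages[i - 1].name}' produces ${stages[i - 1].output_type_id}`
      );
    }
  }
  return {
    chain_name: stages.map((s) => s.name).join(' >> '),
    input_type_id: stages[0].input_type_id,
    output_type_id: stages[stages.length - 1].output_type_id,
    stages_verified: stages.length, // field name assumed for the sketch
  };
}

const composed = composeStages([
  { name: 'embed', input_type_id: 1, output_type_id: 2 },
  { name: 'align', input_type_id: 2, output_type_id: 3 },
]);
console.log(composed.chain_name); // "embed >> align"
```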
§Arguments
- stages - Array of stage descriptors as JSON objects
§Example
const composed = gt.composeProofs([
{ name: 'embed', input_type_id: 1, output_type_id: 2 },
{ name: 'align', input_type_id: 2, output_type_id: 3 },
]);
console.log(composed.chain_name); // "embed >> align"
pub fn verify_attestation(&self, bytes: Vec<u8>) -> bool
pub fn sublinear_attention(
    &mut self,
    query: Vec<f64>,
    edges: Vec<Vec<u32>>,
    dim: u32,
    k: u32,
) -> Result<Value>
Sublinear graph attention using personalized PageRank sparsification.
Instead of attending to all N nodes (O(Nd)), uses PPR to select the top-k most relevant nodes, achieving O(kd) complexity.
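The selection step can be sketched in plain JavaScript. This is illustrative only, not the native implementation: power-iteration PPR from an assumed seed node 0, with an assumed teleport probability `alpha = 0.15`, followed by a top-k cut.

```javascript
// Illustrative sketch: personalized PageRank by power iteration,
// then keep the k highest-scoring nodes.
function pprScores(source, adjacency, alpha = 0.15, iters = 50) {
  const n = adjacency.length;
  let scores = new Array(n).fill(0);
  scores[source] = 1;
  for (let it = 0; it < iters; it++) {
    const next = new Array(n).fill(0);
    next[source] = alpha; // teleport mass back to the seed
    for (let i = 0; i < n; i++) {
      if (adjacency[i].length === 0) continue;
      const share = ((1 - alpha) * scores[i]) / adjacency[i].length;
      for (const j of adjacency[i]) next[j] += share;
    }
    scores = next;
  }
  return scores;
}

function topKByPpr(adjacency, k) {
  return pprScores(0, adjacency)
    .map((s, i) => [s, i])
    .sort((a, b) => b[0] - a[0])
    .slice(0, k)
    .map(([, i]) => i);
}

// On a triangle graph, the seed node keeps the most mass.
const topK = topKByPpr([[1, 2], [0, 2], [0, 1]], 2);
console.log(topK); // node 0 is ranked first
```

Attending only over the returned indices is what turns the O(Nd) pass into the O(kd) pass described above.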
§Arguments
- query - Query vector (length must equal dim)
- edges - Adjacency list: edges[i] is the list of neighbor indices for node i
- dim - Dimension of the query vector
- k - Number of top nodes to attend to
§Returns
JSON object with scores, top_k_indices, and sparsity_ratio
§Example
const result = gt.sublinearAttention([1.0, 0.5], [[1, 2], [0, 2], [0, 1]], 2, 2);
console.log(result.top_k_indices);
pub fn ppr_scores(
    &mut self,
    source: u32,
    adjacency: Vec<Vec<u32>>,
    alpha: f64,
) -> Result<Vec<f64>>
pub fn hamiltonian_step(
    &mut self,
    positions: Vec<f64>,
    momenta: Vec<f64>,
    dt: f64,
) -> Result<Value>
Symplectic integrator step (leapfrog / Störmer-Verlet).
Integrates Hamiltonian dynamics with a harmonic potential V(q) = 0.5*|q|^2, preserving the symplectic structure so that energy is conserved up to a small bounded error.
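The kick-drift-kick structure of a leapfrog step can be sketched in plain JavaScript. This is a minimal sketch for the harmonic Hamiltonian H = 0.5*|p|^2 + 0.5*|q|^2 (force -dV/dq = -q), not the native implementation.

```javascript
// One leapfrog (Störmer-Verlet) step: half kick, drift, half kick.
function leapfrogStep(q, p, dt) {
  const pHalf = p.map((pi, i) => pi - 0.5 * dt * q[i]);      // half kick
  const qNext = q.map((qi, i) => qi + dt * pHalf[i]);        // drift
  const pNext = pHalf.map((pi, i) => pi - 0.5 * dt * qNext[i]); // half kick
  const energy =
    0.5 * pNext.reduce((s, v) => s + v * v, 0) +
    0.5 * qNext.reduce((s, v) => s + v * v, 0);
  return { positions: qNext, momenta: pNext, energy };
}

const state = leapfrogStep([1.0, 0.0], [0.0, 1.0], 0.01);
console.log(state.energy); // stays close to the initial energy of 1.0
```

Evaluating the force once at the old and once at the new position is what makes the update symplectic, unlike a plain Euler step.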
§Arguments
- positions - Position coordinates
- momenta - Momentum coordinates (same length as positions)
- dt - Time step
§Returns
JSON object with positions, momenta, and energy
§Example
const state = gt.hamiltonianStep([1.0, 0.0], [0.0, 1.0], 0.01);
console.log(state.energy);
pub fn hamiltonian_step_graph(
    &mut self,
    positions: Vec<f64>,
    momenta: Vec<f64>,
    edges: Vec<Value>,
    dt: f64,
) -> Result<Value>
Hamiltonian step with graph edge interactions.
positions and momenta are arrays of coordinates. edges is an
array of { src, tgt } objects defining graph interactions.
§Returns
JSON object with positions, momenta, energy, and energy_conserved
§Example
const state = gt.hamiltonianStepGraph(
[1.0, 0.0], [0.0, 1.0],
[{ src: 0, tgt: 1 }], 0.01
);
pub fn spiking_attention(
    &mut self,
    spikes: Vec<f64>,
    edges: Vec<Vec<u32>>,
    threshold: f64,
) -> Result<Vec<f64>>
Spiking neural attention: event-driven sparse attention.
Nodes emit attention only when their membrane potential exceeds a threshold, producing sparse activation patterns.
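The event-driven idea can be sketched in plain JavaScript. This is illustrative only; how the native routine weights a propagated spike is not shown here, so the sketch simply forwards the firing node's potential to its neighbors.

```javascript
// Sketch: only nodes whose membrane potential crosses the threshold
// emit an event; sub-threshold nodes contribute nothing.
function spikingAttentionSketch(potentials, edges, threshold) {
  const out = new Array(potentials.length).fill(0);
  potentials.forEach((v, i) => {
    if (v >= threshold) {
      for (const j of edges[i]) out[j] += v; // propagate the spike
    }
  });
  return out;
}

const out = spikingAttentionSketch([0.5, 1.5, 0.3], [[1], [0, 2], [1]], 1.0);
console.log(out); // [1.5, 0, 1.5] -- only node 1 fires
```

Because most nodes stay below threshold, the resulting activation pattern is sparse, which is the point of the event-driven formulation.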
§Arguments
- spikes - Membrane potentials for each node
- edges - Adjacency list for the graph
- threshold - Firing threshold
§Returns
Output activation vector (one value per node)
§Example
const output = gt.spikingAttention([0.5, 1.5, 0.3], [[1], [0, 2], [1]], 1.0);
pub fn hebbian_update(
    &mut self,
    pre: Vec<f64>,
    post: Vec<f64>,
    weights: Vec<f64>,
    lr: f64,
) -> Result<Vec<f64>>
Hebbian learning rule update.
Applies the outer-product Hebbian rule: w_ij += lr * pre_i * post_j. The weight vector is a flattened (pre.len * post.len) matrix.
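The rule w_ij += lr * pre_i * post_j can be sketched directly in plain JavaScript. Illustrative only; row-major layout of the flattened matrix (index i * post.length + j) is an assumption of this sketch.

```javascript
// Outer-product Hebbian update on a flattened (pre.length x post.length)
// weight matrix, assumed row-major.
function hebbianUpdateSketch(pre, post, weights, lr) {
  const updated = weights.slice();
  for (let i = 0; i < pre.length; i++) {
    for (let j = 0; j < post.length; j++) {
      updated[i * post.length + j] += lr * pre[i] * post[j];
    }
  }
  return updated;
}

// Same inputs as the example below: only the (pre=0, post=1) pair is
// co-active, so only that one weight grows.
const w = hebbianUpdateSketch([1.0, 0.0], [0.0, 1.0], [0, 0, 0, 0], 0.1);
console.log(w); // [0, 0.1, 0, 0]
```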
§Arguments
- pre - Pre-synaptic activations
- post - Post-synaptic activations
- weights - Current weight vector (flattened matrix)
- lr - Learning rate
§Returns
Updated weight vector
§Example
const updated = gt.hebbianUpdate([1.0, 0.0], [0.0, 1.0], [0, 0, 0, 0], 0.1);
pub fn spiking_step(
    &mut self,
    features: Vec<Vec<f64>>,
    adjacency: Vec<f64>,
) -> Result<Value>
Spiking step over 2D node features with adjacency matrix.
features is an array of arrays (n x dim). adjacency is a flat
row-major array (n x n). Returns { features, spikes, weights }.
§Example
const result = gt.spikingStep(
[[0.8, 0.6], [0.1, 0.2]],
[0, 0.5, 0.3, 0]
);
pub fn verified_step(
    &mut self,
    weights: Vec<f64>,
    gradients: Vec<f64>,
    lr: f64,
) -> Result<Value>
A single verified SGD step with proof of gradient application.
Applies w' = w - lr * grad and returns the new weights along with a proof receipt, loss before/after, and gradient norm.
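The update and the bookkeeping the receipt reports can be sketched in plain JavaScript. Illustrative only: the stand-in squared loss 0.5*|w|^2 is an assumption of this sketch (the native loss is not specified here), and the proof machinery itself lives inside the native addon.

```javascript
// Sketch of w' = w - lr * grad plus the reported quantities.
function verifiedStepSketch(weights, gradients, lr) {
  const loss = (w) => 0.5 * w.reduce((s, v) => s + v * v, 0); // stand-in loss
  const lossBefore = loss(weights);
  const updated = weights.map((w, i) => w - lr * gradients[i]);
  const gradNorm = Math.sqrt(gradients.reduce((s, g) => s + g * g, 0));
  return { weights: updated, lossBefore, lossAfter: loss(updated), gradNorm };
}

const r = verifiedStepSketch([1.0, 2.0], [0.1, 0.2], 0.01);
console.log(r.lossAfter < r.lossBefore); // true for this descent direction
```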
§Arguments
- weights - Current weight vector
- gradients - Gradient vector (same length as weights)
- lr - Learning rate
§Returns
JSON object with weights, proof_id, loss_before, loss_after, gradient_norm
§Example
const result = gt.verifiedStep([1.0, 2.0], [0.1, 0.2], 0.01);
console.log(result.loss_after < result.loss_before); // true
pub fn verified_training_step(
    &mut self,
    features: Vec<f64>,
    targets: Vec<f64>,
    weights: Vec<f64>,
) -> Result<Value>
Verified training step with features, targets, and weights.
Computes MSE loss, applies SGD, and produces a training certificate.
§Arguments
- features - Input feature vector
- targets - Target values
- weights - Current weight vector
§Returns
JSON object with weights, certificate_id, loss, loss_monotonic, lipschitz_satisfied
§Example
const result = gt.verifiedTrainingStep([1.0, 2.0], [0.5, 1.0], [0.5, 0.5]);
pub fn product_manifold_distance(
    &self,
    a: Vec<f64>,
    b: Vec<f64>,
    curvatures: Vec<f64>,
) -> f64
Product manifold distance (mixed curvature spaces).
Splits vectors into sub-spaces according to the curvatures array:
- curvature > 0: spherical distance
- curvature < 0: hyperbolic distance
- curvature == 0: Euclidean distance
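The per-curvature split can be sketched in plain JavaScript using the standard model distances. Illustrative only: equal-sized slices per curvature entry, the great-circle formula for the spherical case, and the Poincaré-ball formula for the hyperbolic case (which requires those slices to lie inside the unit ball) are all assumptions of this sketch; the native code may differ in detail.

```javascript
// Sketch of a mixed-curvature distance over equal sub-space slices.
function productManifoldDistanceSketch(a, b, curvatures) {
  const sub = a.length / curvatures.length; // sub-space dimensionality
  let total = 0;
  curvatures.forEach((c, k) => {
    const ai = a.slice(k * sub, (k + 1) * sub);
    const bi = b.slice(k * sub, (k + 1) * sub);
    let d;
    if (c === 0) {
      // Euclidean distance
      d = Math.hypot(...ai.map((v, i) => v - bi[i]));
    } else if (c > 0) {
      // spherical: angle between the two directions
      const dot = ai.reduce((s, v, i) => s + v * bi[i], 0);
      const cos = dot / (Math.hypot(...ai) * Math.hypot(...bi));
      d = Math.acos(Math.min(1, Math.max(-1, cos)));
    } else {
      // hyperbolic: Poincaré-ball distance (points must be inside the ball)
      const diff2 = ai.reduce((s, v, i) => s + (v - bi[i]) ** 2, 0);
      const na2 = ai.reduce((s, v) => s + v * v, 0);
      const nb2 = bi.reduce((s, v) => s + v * v, 0);
      d = Math.acosh(1 + (2 * diff2) / ((1 - na2) * (1 - nb2)));
    }
    total += d * d; // combine sub-space distances in quadrature
  });
  return Math.sqrt(total);
}

// Euclidean slice plus a hyperbolic slice with points inside the unit ball.
const d = productManifoldDistanceSketch(
  [1, 0, 0.1, 0.2],
  [0, 1, 0.3, 0.1],
  [0.0, -1.0]
);
```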
§Arguments
- a - First point
- b - Second point (same length as a)
- curvatures - Curvature for each sub-space
§Returns
The product manifold distance as a number
§Example
const d = gt.productManifoldDistance([1, 0, 0, 1], [0, 1, 1, 0], [0.0, -1.0]);
pub fn product_manifold_attention(
    &mut self,
    features: Vec<f64>,
    edges: Vec<Value>,
) -> Result<Value>
Product manifold attention with mixed curvatures.
Computes attention in a product of spherical, hyperbolic, and Euclidean subspaces, combining the results.
§Arguments
- features - Input feature vector
- edges - Array of { src, tgt } objects
§Returns
JSON object with output, curvatures, distances
§Example
const result = gt.productManifoldAttention(
[1.0, 0.5, -0.3, 0.8],
[{ src: 0, tgt: 1 }]
);
pub fn causal_attention(
    &mut self,
    query: Vec<f64>,
    keys: Vec<Vec<f64>>,
    timestamps: Vec<f64>,
) -> Result<Vec<f64>>
Causal attention with temporal ordering.
Attention scores are masked so that a query at time t_i can only attend to keys at time t_j <= t_i (no information leakage from the future).
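The masking can be sketched in plain JavaScript: keys dated after the query receive a score of -Infinity before the softmax, so they end up with exactly zero weight. Illustrative only; the explicit `queryTime` parameter is an assumption of this sketch (the native signature carries timestamps only for the keys).

```javascript
// Sketch of causal masking followed by a numerically stable softmax.
function causalAttentionSketch(query, keys, timestamps, queryTime) {
  const scores = keys.map((k, j) =>
    timestamps[j] <= queryTime
      ? k.reduce((s, v, i) => s + v * query[i], 0) // dot-product score
      : -Infinity // future key: masked out
  );
  const m = Math.max(...scores);
  const exps = scores.map((s) => Math.exp(s - m));
  const z = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / z);
}

const w = causalAttentionSketch(
  [1.0, 0.0],
  [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]],
  [1.0, 2.0, 3.0],
  2.0
);
console.log(w); // the key at t = 3 gets weight 0; the rest sum to 1
```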
§Arguments
- query - Query vector
- keys - Array of key vectors
- timestamps - Timestamp for each key (same length as keys)
§Returns
Softmax attention weights (one per key, sums to 1.0)
§Example
const scores = gt.causalAttention(
[1.0, 0.0],
[[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]],
[1.0, 2.0, 3.0]
);
pub fn causal_attention_graph(
    &mut self,
    features: Vec<f64>,
    timestamps: Vec<f64>,
    edges: Vec<Value>,
) -> Result<Vec<f64>>
Causal attention over features, timestamps, and graph edges.
Returns attention-weighted output features where each node can only attend to neighbors with earlier or equal timestamps.
§Arguments
- features - Feature value for each node
- timestamps - Timestamp for each node
- edges - Array of { src, tgt } objects
§Returns
Array of attention-weighted output values
§Example
const output = gt.causalAttentionGraph(
[1.0, 0.5, 0.8],
[1.0, 2.0, 3.0],
[{ src: 0, tgt: 1 }, { src: 1, tgt: 2 }]
);
pub fn granger_extract(
    &mut self,
    attention_history: Vec<f64>,
    num_nodes: u32,
    num_steps: u32,
) -> Result<Value>
Extract Granger causality DAG from attention history.
Tests pairwise Granger causality between all nodes and returns edges where the F-statistic exceeds the significance threshold.
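A single pairwise test can be sketched in plain JavaScript: does adding y's past improve the prediction of x beyond x's own past? Illustrative only; this sketch uses one lag and no intercept, while the native routine's model order and significance threshold are not specified here.

```javascript
// Lag-1 Granger F-statistic: restricted model x[t] = a*x[t-1] vs
// unrestricted model x[t] = b*x[t-1] + c*y[t-1].
function grangerF(x, y) {
  const n = x.length - 1;
  // restricted OLS (one predictor)
  let sxx = 0, sxy = 0;
  for (let t = 1; t < x.length; t++) {
    sxx += x[t - 1] * x[t - 1];
    sxy += x[t - 1] * x[t];
  }
  const a = sxy / sxx;
  let rssR = 0;
  for (let t = 1; t < x.length; t++) rssR += (x[t] - a * x[t - 1]) ** 2;
  // unrestricted OLS (two predictors, 2x2 normal equations)
  let s11 = 0, s12 = 0, s22 = 0, r1 = 0, r2 = 0;
  for (let t = 1; t < x.length; t++) {
    s11 += x[t - 1] * x[t - 1];
    s12 += x[t - 1] * y[t - 1];
    s22 += y[t - 1] * y[t - 1];
    r1 += x[t - 1] * x[t];
    r2 += y[t - 1] * x[t];
  }
  const det = s11 * s22 - s12 * s12;
  const b = (s22 * r1 - s12 * r2) / det;
  const c = (s11 * r2 - s12 * r1) / det;
  let rssU = 0;
  for (let t = 1; t < x.length; t++)
    rssU += (x[t] - b * x[t - 1] - c * y[t - 1]) ** 2;
  // F-statistic: 1 restriction, n - 2 residual degrees of freedom
  return (rssR - rssU) / (rssU / (n - 2));
}

// Deterministic pseudo-noise driver y; x is driven by y's past.
let seed = 1;
const rand = () => (seed = (seed * 48271) % 2147483647) / 2147483647 - 0.5;
const y = Array.from({ length: 60 }, rand);
const x = [0];
for (let t = 1; t < 60; t++) x.push(0.9 * y[t - 1] + 0.1 * x[t - 1]);
console.log(grangerF(x, y) > grangerF(y, x)); // y Granger-causes x
```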
§Arguments
- attention_history - Flat array (T x N, row-major)
- num_nodes - Number of nodes N
- num_steps - Number of time steps T
§Returns
JSON object with edges and num_nodes
§Example
const dag = gt.grangerExtract(flatHistory, 3, 20);
console.log(dag.edges); // [{ source, target, f_statistic, is_causal }]
pub fn game_theoretic_attention(
    &mut self,
    features: Vec<f64>,
    edges: Vec<Value>,
) -> Result<Value>
Game-theoretic attention: computes Nash equilibrium allocations.
Each node is a player with features as utility parameters. Edges define strategic interactions. Uses best-response iteration to converge to Nash equilibrium.
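Best-response iteration can be sketched in plain JavaScript. Illustrative only: the quadratic payoff with a crowding penalty on shared edges is an assumption of this sketch, not the native game; it is chosen so that simultaneous best responses form a contraction and provably converge.

```javascript
// Sketch: each player's payoff is u_i(a) = f_i*a - 0.5*a^2 - 0.1*a*crowd,
// where crowd is the neighbours' total allocation. The best response is
// a_i = max(0, f_i - 0.1*crowd); iterating it converges to the fixed point.
function bestResponseSketch(features, edges, iters = 100) {
  const nbrs = features.map(() => []);
  for (const { src, tgt } of edges) {
    nbrs[src].push(tgt);
    nbrs[tgt].push(src);
  }
  let alloc = features.map(() => 0.5);
  let gap = Infinity;
  for (let it = 0; it < iters; it++) {
    gap = 0;
    alloc = alloc.map((a, i) => {
      const crowd = nbrs[i].reduce((s, j) => s + alloc[j], 0);
      const best = Math.max(0, features[i] - 0.1 * crowd);
      gap = Math.max(gap, Math.abs(best - a)); // largest unilateral move
      return best;
    });
  }
  return { allocations: alloc, nashGap: gap, converged: gap < 1e-6 };
}

const r = bestResponseSketch(
  [1.0, 0.5, 0.8],
  [{ src: 0, tgt: 1 }, { src: 1, tgt: 2 }]
);
console.log(r.converged); // true
```

At the fixed point no player can improve by deviating, which is the Nash condition the nash_gap field quantifies.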
§Arguments
- features - Feature/utility value for each node
- edges - Array of { src, tgt } objects
§Returns
JSON object with allocations, utilities, nash_gap, converged
§Example
const result = gt.gameTheoreticAttention(
[1.0, 0.5, 0.8],
[{ src: 0, tgt: 1 }, { src: 1, tgt: 2 }]
);
console.log(result.converged); // true