
Struct GraphTransformer 

pub struct GraphTransformer { /* private fields */ }

Graph Transformer with proof-gated operations for Node.js.

Provides sublinear attention over graph structures, physics-informed layers (Hamiltonian dynamics), biologically-inspired learning (spiking networks, Hebbian plasticity), and verified training with proof receipts.

§Example

const { GraphTransformer } = require('ruvector-graph-transformer-node');
const gt = new GraphTransformer();
console.log(gt.version());

Implementations§

impl GraphTransformer

pub fn instance_of<V: NapiRaw>(env: Env, value: V) -> Result<bool>

pub fn new(_config: Option<Value>) -> Self

Create a new Graph Transformer instance.

§Arguments
  • config - Optional JSON configuration (reserved for future use)
§Example
const gt = new GraphTransformer();
const gt2 = new GraphTransformer({ maxFuel: 10000 });

pub fn version(&self) -> String

Get the library version string.

§Example
console.log(gt.version()); // "2.0.4"

pub fn create_proof_gate(&mut self, dim: u32) -> Result<Value>

Create a proof gate for a given dimension.

Returns a JSON object describing the gate (id, dimension, verified).

§Arguments
  • dim - The dimension to gate on
§Example
const gate = gt.createProofGate(128);
console.log(gate.dimension); // 128

pub fn prove_dimension(&mut self, expected: u32, actual: u32) -> Result<Value>

Prove that two dimensions are equal.

Returns a proof result with proof_id, expected, actual, and verified fields.

§Arguments
  • expected - The expected dimension
  • actual - The actual dimension
§Example
const proof = gt.proveDimension(128, 128);
console.log(proof.verified); // true

pub fn create_attestation(&self, proof_id: u32) -> Result<Vec<u8>>

Create a proof attestation (serializable receipt) for a given proof ID.

Returns the attestation as a byte buffer (82 bytes) that can be embedded in RVF WITNESS_SEG entries.

§Arguments
  • proof_id - The proof term ID to create an attestation for
§Example
const proof = gt.proveDimension(64, 64);
const attestation = gt.createAttestation(proof.proof_id);
console.log(attestation.length); // 82

pub fn compose_proofs(&mut self, stages: Vec<Value>) -> Result<Value>

Compose a chain of pipeline stages, verifying type compatibility.

Each stage must have name, input_type_id, and output_type_id. Returns a composed proof with the overall input/output types and the number of stages verified.

§Arguments
  • stages - Array of stage descriptors as JSON objects
§Example
const composed = gt.composeProofs([
  { name: 'embed', input_type_id: 1, output_type_id: 2 },
  { name: 'align', input_type_id: 2, output_type_id: 3 },
]);
console.log(composed.chain_name); // "embed >> align"
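To make the type-compatibility rule concrete, here is a hypothetical plain-JavaScript model of the check composeProofs performs, independent of the native binding; the return shape is assumed from the fields documented above:

```javascript
// Each stage's output_type_id must equal the next stage's input_type_id.
function composeStages(stages) {
  for (let i = 0; i + 1 < stages.length; i++) {
    if (stages[i].output_type_id !== stages[i + 1].input_type_id) {
      throw new Error(`type mismatch between '${stages[i].name}' and '${stages[i + 1].name}'`);
    }
  }
  return {
    chain_name: stages.map(s => s.name).join(' >> '),
    input_type_id: stages[0].input_type_id,
    output_type_id: stages[stages.length - 1].output_type_id,
    stages_verified: stages.length,
  };
}

const composed = composeStages([
  { name: 'embed', input_type_id: 1, output_type_id: 2 },
  { name: 'align', input_type_id: 2, output_type_id: 3 },
]);
console.log(composed.chain_name); // "embed >> align"
```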

pub fn verify_attestation(&self, bytes: Vec<u8>) -> bool

Verify an attestation from its byte representation.

Returns true if the attestation is structurally valid.

§Arguments
  • bytes - The attestation bytes (82 bytes minimum)
§Example
const valid = gt.verifyAttestation(attestationBytes);

pub fn sublinear_attention( &mut self, query: Vec<f64>, edges: Vec<Vec<u32>>, dim: u32, k: u32, ) -> Result<Value>

Sublinear graph attention using personalized PageRank sparsification.

Instead of attending to all N nodes (O(Nd)), uses PPR to select the top-k most relevant nodes, achieving O(kd) complexity.

§Arguments
  • query - Query vector (length must equal dim)
  • edges - Adjacency list: edges[i] is the list of neighbor indices for node i
  • dim - Dimension of the query vector
  • k - Number of top nodes to attend to
§Returns

JSON object with scores, top_k_indices, and sparsity_ratio

§Example
const result = gt.sublinearAttention([1.0, 0.5], [[1, 2], [0, 2], [0, 1]], 2, 2);
console.log(result.top_k_indices);

pub fn ppr_scores( &mut self, source: u32, adjacency: Vec<Vec<u32>>, alpha: f64, ) -> Result<Vec<f64>>

Compute personalized PageRank scores from a source node.

§Arguments
  • source - Source node index
  • adjacency - Adjacency list for the graph
  • alpha - Teleport probability (typically 0.15)
§Returns

Array of PPR scores, one per node

§Example
const scores = gt.pprScores(0, [[1], [0, 2], [1]], 0.15);
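For intuition, the PPR fixed point can be approximated in plain JavaScript with power iteration; this sketches the math, not the binding's actual solver:

```javascript
// Mass teleports back to the source with probability alpha and
// otherwise spreads evenly over each node's out-edges.
function pprPowerIteration(source, adjacency, alpha, iters = 100) {
  const n = adjacency.length;
  let scores = new Array(n).fill(0);
  scores[source] = 1;
  for (let it = 0; it < iters; it++) {
    const next = new Array(n).fill(0);
    next[source] += alpha; // teleport mass returns to the source
    for (let i = 0; i < n; i++) {
      const nbrs = adjacency[i];
      if (nbrs.length === 0) {
        next[source] += (1 - alpha) * scores[i]; // dangling node
        continue;
      }
      const share = ((1 - alpha) * scores[i]) / nbrs.length;
      for (const j of nbrs) next[j] += share;
    }
    scores = next;
  }
  return scores;
}

const scores = pprPowerIteration(0, [[1], [0, 2], [1]], 0.15);
console.log(scores[0] > scores[2]); // true: mass concentrates near the source
```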

pub fn hamiltonian_step( &mut self, positions: Vec<f64>, momenta: Vec<f64>, dt: f64, ) -> Result<Value>

Symplectic integrator step (leapfrog / Störmer-Verlet).

Integrates Hamiltonian dynamics with a harmonic potential V(q) = 0.5*|q|^2, preserving the symplectic structure (energy-conserving).

§Arguments
  • positions - Position coordinates
  • momenta - Momentum coordinates (same length as positions)
  • dt - Time step
§Returns

JSON object with positions, momenta, and energy

§Example
const state = gt.hamiltonianStep([1.0, 0.0], [0.0, 1.0], 0.01);
console.log(state.energy);
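One leapfrog (kick-drift-kick) step for the harmonic potential named above, where the force is F(q) = -q, can be sketched in plain JavaScript; the binding's internals may differ in detail:

```javascript
function leapfrogStep(q, p, dt) {
  const pHalf = p.map((pi, i) => pi - 0.5 * dt * q[i]);       // half kick
  const qNew = q.map((qi, i) => qi + dt * pHalf[i]);          // full drift
  const pNew = pHalf.map((pi, i) => pi - 0.5 * dt * qNew[i]); // half kick
  const energy = 0.5 * (
    qNew.reduce((s, x) => s + x * x, 0) +
    pNew.reduce((s, x) => s + x * x, 0)
  );
  return { positions: qNew, momenta: pNew, energy };
}

const lfState = leapfrogStep([1.0, 0.0], [0.0, 1.0], 0.01);
console.log(lfState.energy); // stays close to the initial energy of 1.0
```

The symplectic structure is what keeps the energy bounded over long trajectories, unlike plain Euler integration.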

pub fn hamiltonian_step_graph( &mut self, positions: Vec<f64>, momenta: Vec<f64>, edges: Vec<Value>, dt: f64, ) -> Result<Value>

Hamiltonian step with graph edge interactions.

positions and momenta are arrays of coordinates. edges is an array of { src, tgt } objects defining graph interactions.

§Returns

JSON object with positions, momenta, energy, and energy_conserved

§Example
const state = gt.hamiltonianStepGraph(
  [1.0, 0.0], [0.0, 1.0],
  [{ src: 0, tgt: 1 }], 0.01
);

pub fn spiking_attention( &mut self, spikes: Vec<f64>, edges: Vec<Vec<u32>>, threshold: f64, ) -> Result<Vec<f64>>

Spiking neural attention: event-driven sparse attention.

Nodes emit attention only when their membrane potential exceeds a threshold, producing sparse activation patterns.

§Arguments
  • spikes - Membrane potentials for each node
  • edges - Adjacency list for the graph
  • threshold - Firing threshold
§Returns

Output activation vector (one value per node)

§Example
const output = gt.spikingAttention([0.5, 1.5, 0.3], [[1], [0, 2], [1]], 1.0);
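An illustrative event-driven rule, purely as a sketch: a node contributes to its neighbors only if its membrane potential reaches the threshold. The exact propagation rule in the native code is not documented.

```javascript
function spikingAttentionSketch(spikes, edges, threshold) {
  const out = new Array(spikes.length).fill(0);
  spikes.forEach((v, i) => {
    if (v < threshold) return;             // subthreshold: stays silent
    for (const j of edges[i]) out[j] += v; // fired node drives its neighbors
  });
  return out;
}

const spikeOut = spikingAttentionSketch([0.5, 1.5, 0.3], [[1], [0, 2], [1]], 1.0);
console.log(spikeOut); // [ 1.5, 0, 1.5 ] (only node 1 fired)
```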

pub fn hebbian_update( &mut self, pre: Vec<f64>, post: Vec<f64>, weights: Vec<f64>, lr: f64, ) -> Result<Vec<f64>>

Hebbian learning rule update.

Applies the outer-product Hebbian rule: w_ij += lr * pre_i * post_j. The weight vector is a flattened (pre.len * post.len) matrix.

§Arguments
  • pre - Pre-synaptic activations
  • post - Post-synaptic activations
  • weights - Current weight vector (flattened matrix)
  • lr - Learning rate
§Returns

Updated weight vector

§Example
const updated = gt.hebbianUpdate([1.0, 0.0], [0.0, 1.0], [0, 0, 0, 0], 0.1);
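The stated rule can be written out directly in plain JavaScript; the flattened weight matrix is indexed as w[i * post.length + j]:

```javascript
function hebbianUpdateSketch(pre, post, weights, lr) {
  const w = weights.slice(); // copy, leaving the input untouched
  for (let i = 0; i < pre.length; i++) {
    for (let j = 0; j < post.length; j++) {
      w[i * post.length + j] += lr * pre[i] * post[j]; // outer-product rule
    }
  }
  return w;
}

const hebbOut = hebbianUpdateSketch([1.0, 0.0], [0.0, 1.0], [0, 0, 0, 0], 0.1);
console.log(hebbOut); // [ 0, 0.1, 0, 0 ]
```

Only the (pre=1, post=1) pair strengthens, which is the "fire together, wire together" behavior.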

pub fn spiking_step( &mut self, features: Vec<Vec<f64>>, adjacency: Vec<f64>, ) -> Result<Value>

Spiking step over 2D node features with adjacency matrix.

features is an array of arrays (n x dim). adjacency is a flat row-major array (n x n). Returns { features, spikes, weights }.

§Example
const result = gt.spikingStep(
  [[0.8, 0.6], [0.1, 0.2]],
  [0, 0.5, 0.3, 0]
);

pub fn verified_step( &mut self, weights: Vec<f64>, gradients: Vec<f64>, lr: f64, ) -> Result<Value>

A single verified SGD step with proof of gradient application.

Applies w' = w - lr * grad and returns the new weights along with a proof receipt, loss before/after, and gradient norm.

§Arguments
  • weights - Current weight vector
  • gradients - Gradient vector (same length as weights)
  • lr - Learning rate
§Returns

JSON object with weights, proof_id, loss_before, loss_after, gradient_norm

§Example
const result = gt.verifiedStep([1.0, 2.0], [0.1, 0.2], 0.01);
console.log(result.loss_after < result.loss_before); // true
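A plain-JS sketch of the update w' = w - lr * grad and the reported quantities. The loss used here (0.5*|w|^2) is a stand-in purely for illustration; the binding's actual loss function is not documented, and no proof receipt is modeled:

```javascript
function sgdStepSketch(weights, gradients, lr) {
  const loss = w => 0.5 * w.reduce((s, x) => s + x * x, 0); // stand-in loss
  const next = weights.map((w, i) => w - lr * gradients[i]); // SGD update
  return {
    weights: next,
    loss_before: loss(weights),
    loss_after: loss(next),
    gradient_norm: Math.sqrt(gradients.reduce((s, g) => s + g * g, 0)),
  };
}

const sgdResult = sgdStepSketch([1.0, 2.0], [0.1, 0.2], 0.01);
console.log(sgdResult.loss_after < sgdResult.loss_before); // true
```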

pub fn verified_training_step( &mut self, features: Vec<f64>, targets: Vec<f64>, weights: Vec<f64>, ) -> Result<Value>

Verified training step with features, targets, and weights.

Computes MSE loss, applies SGD, and produces a training certificate.

§Arguments
  • features - Input feature vector
  • targets - Target values
  • weights - Current weight vector
§Returns

JSON object with weights, certificate_id, loss, loss_monotonic, lipschitz_satisfied

§Example
const result = gt.verifiedTrainingStep([1.0, 2.0], [0.5, 1.0], [0.5, 0.5]);

pub fn product_manifold_distance( &self, a: Vec<f64>, b: Vec<f64>, curvatures: Vec<f64>, ) -> f64

Product manifold distance (mixed curvature spaces).

Splits vectors into sub-spaces according to the curvatures array:

  • curvature > 0: spherical distance
  • curvature < 0: hyperbolic distance
  • curvature == 0: Euclidean distance
§Arguments
  • a - First point
  • b - Second point (same length as a)
  • curvatures - Curvature for each sub-space
§Returns

The product manifold distance as a number

§Example
const d = gt.productManifoldDistance([1, 0, 0, 1], [0, 1, 1, 0], [0.0, -1.0]);
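A heavily hedged sketch of one plausible reading: the vector splits evenly across the curvatures, each sub-space uses the distance for its curvature sign, and sub-distances combine as the square root of the sum of squares. The even split and the hyperbolic surrogate acosh(1 + |a-b|^2) are assumptions; the binding's exact formulas may differ.

```javascript
function subDistance(a, b, k) {
  const d2 = a.reduce((s, x, i) => s + (x - b[i]) ** 2, 0);
  if (k === 0) return Math.sqrt(d2); // Euclidean
  if (k > 0) {                       // spherical: great-circle angle
    const dot = a.reduce((s, x, i) => s + x * b[i], 0);
    const na = Math.hypot(...a), nb = Math.hypot(...b);
    const c = Math.min(1, Math.max(-1, dot / (na * nb)));
    return Math.acos(c) / Math.sqrt(k);
  }
  return Math.acosh(1 + d2) / Math.sqrt(-k); // hyperbolic surrogate (assumed)
}

function productDistanceSketch(a, b, curvatures) {
  const sub = a.length / curvatures.length; // even split assumed
  let total = 0;
  curvatures.forEach((k, s) => {
    const d = subDistance(
      a.slice(s * sub, (s + 1) * sub),
      b.slice(s * sub, (s + 1) * sub),
      k
    );
    total += d * d;
  });
  return Math.sqrt(total);
}

const pmd = productDistanceSketch([1, 0, 0, 1], [0, 1, 1, 0], [0.0, -1.0]);
console.log(pmd > 0); // true
```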

pub fn product_manifold_attention( &mut self, features: Vec<f64>, edges: Vec<Value>, ) -> Result<Value>

Product manifold attention with mixed curvatures.

Computes attention in a product of spherical, hyperbolic, and Euclidean subspaces, combining the results.

§Arguments
  • features - Input feature vector
  • edges - Array of { src, tgt } objects
§Returns

JSON object with output, curvatures, distances

§Example
const result = gt.productManifoldAttention(
  [1.0, 0.5, -0.3, 0.8],
  [{ src: 0, tgt: 1 }]
);

pub fn causal_attention( &mut self, query: Vec<f64>, keys: Vec<Vec<f64>>, timestamps: Vec<f64>, ) -> Result<Vec<f64>>

Causal attention with temporal ordering.

Attention scores are masked so that a query at time t_i can only attend to keys at times t_j <= t_i (no information leakage from the future).

§Arguments
  • query - Query vector
  • keys - Array of key vectors
  • timestamps - Timestamp for each key (same length as keys)
§Returns

Softmax attention weights (one per key, sums to 1.0)

§Example
const scores = gt.causalAttention(
  [1.0, 0.0],
  [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]],
  [1.0, 2.0, 3.0]
);
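The masking-plus-softmax structure can be sketched as follows; assigning the query the latest timestamp is an assumption made for this illustration, since the API gives the query no timestamp of its own:

```javascript
function causalAttentionSketch(query, keys, timestamps, queryTime = Math.max(...timestamps)) {
  const scores = keys.map((k, j) =>
    timestamps[j] > queryTime
      ? -Infinity                                  // future key: masked out
      : k.reduce((s, x, d) => s + x * query[d], 0) // dot-product score
  );
  const m = Math.max(...scores);
  const exps = scores.map(s => Math.exp(s - m)); // stable softmax
  const z = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / z);
}

const attn = causalAttentionSketch(
  [1.0, 0.0],
  [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]],
  [1.0, 2.0, 3.0]
);
console.log(attn); // three weights, largest on the aligned key
```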

pub fn causal_attention_graph( &mut self, features: Vec<f64>, timestamps: Vec<f64>, edges: Vec<Value>, ) -> Result<Vec<f64>>

Causal attention over features, timestamps, and graph edges.

Returns attention-weighted output features where each node can only attend to neighbors with earlier or equal timestamps.

§Arguments
  • features - Feature value for each node
  • timestamps - Timestamp for each node
  • edges - Array of { src, tgt } objects
§Returns

Array of attention-weighted output values

§Example
const output = gt.causalAttentionGraph(
  [1.0, 0.5, 0.8],
  [1.0, 2.0, 3.0],
  [{ src: 0, tgt: 1 }, { src: 1, tgt: 2 }]
);

pub fn granger_extract( &mut self, attention_history: Vec<f64>, num_nodes: u32, num_steps: u32, ) -> Result<Value>

Extract Granger causality DAG from attention history.

Tests pairwise Granger causality between all nodes and returns edges where the F-statistic exceeds the significance threshold.

§Arguments
  • attention_history - Flat array (T x N, row-major)
  • num_nodes - Number of nodes N
  • num_steps - Number of time steps T
§Returns

JSON object with edges and num_nodes

§Example
const dag = gt.grangerExtract(flatHistory, 3, 20);
console.log(dag.edges); // [{ source, target, f_statistic, is_causal }]

pub fn game_theoretic_attention( &mut self, features: Vec<f64>, edges: Vec<Value>, ) -> Result<Value>

Game-theoretic attention: computes Nash equilibrium allocations.

Each node is a player with features as utility parameters. Edges define strategic interactions. Uses best-response iteration to converge to Nash equilibrium.

§Arguments
  • features - Feature/utility value for each node
  • edges - Array of { src, tgt } objects
§Returns

JSON object with allocations, utilities, nash_gap, converged

§Example
const result = gt.gameTheoreticAttention(
  [1.0, 0.5, 0.8],
  [{ src: 0, tgt: 1 }, { src: 1, tgt: 2 }]
);
console.log(result.converged); // true
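A toy best-response (Jacobi) iteration, purely illustrative: the quadratic utility u_i = f_i*x_i - 0.5*x_i^2 - c*x_i*(sum of neighbor allocations) is an assumption, not the binding's documented game; its best response is x_i = f_i - c * sum over neighbors of x_j.

```javascript
function bestResponseEquilibrium(features, edges, c = 0.1, iters = 100) {
  const n = features.length;
  const nbrs = Array.from({ length: n }, () => []);
  for (const { src, tgt } of edges) {
    nbrs[src].push(tgt); // treat interactions as undirected
    nbrs[tgt].push(src);
  }
  let x = new Array(n).fill(0);
  let gap = Infinity;
  for (let it = 0; it < iters && gap > 1e-9; it++) {
    const next = features.map((f, i) =>
      f - c * nbrs[i].reduce((s, j) => s + x[j], 0) // best response to others
    );
    gap = Math.max(...next.map((v, i) => Math.abs(v - x[i])));
    x = next;
  }
  return { allocations: x, nash_gap: gap, converged: gap <= 1e-9 };
}

const nashRes = bestResponseEquilibrium(
  [1.0, 0.5, 0.8],
  [{ src: 0, tgt: 1 }, { src: 1, tgt: 2 }]
);
console.log(nashRes.converged); // true
```

With weak coupling (small c) the iteration is a contraction, so it converges to the unique fixed point where no player can improve unilaterally.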

pub fn stats(&self) -> Value

Get aggregate statistics as a JSON object.

§Example
const stats = gt.stats();
console.log(stats.proofs_verified);

pub fn reset(&mut self)

Reset all internal state (caches, counters, gates).

§Example
gt.reset();

Trait Implementations§

impl FromNapiMutRef for GraphTransformer

unsafe fn from_napi_mut_ref( env: napi_env, napi_val: napi_value, ) -> Result<&'static mut Self>

impl FromNapiRef for GraphTransformer

unsafe fn from_napi_ref( env: napi_env, napi_val: napi_value, ) -> Result<&'static Self>

impl FromNapiValue for &GraphTransformer

unsafe fn from_napi_value(env: napi_env, napi_val: napi_value) -> Result<Self>

fn from_unknown(value: JsUnknown) -> Result<Self, Error>

impl FromNapiValue for &mut GraphTransformer

unsafe fn from_napi_value(env: napi_env, napi_val: napi_value) -> Result<Self>

fn from_unknown(value: JsUnknown) -> Result<Self, Error>

impl ObjectFinalize for GraphTransformer

fn finalize(self, env: Env) -> Result<(), Error>

impl ToNapiValue for GraphTransformer

impl TypeName for &GraphTransformer

impl TypeName for &mut GraphTransformer

impl TypeName for GraphTransformer

impl ValidateNapiValue for &GraphTransformer

unsafe fn validate(env: napi_env, napi_val: napi_value) -> Result<napi_value>

impl ValidateNapiValue for &mut GraphTransformer

unsafe fn validate(env: napi_env, napi_val: napi_value) -> Result<napi_value>
