tensor_forge is a minimal compute graph runtime for tensor operations.
The crate provides:
- Graph, a directed acyclic compute graph of tensor operations,
- Executor, a deterministic execution engine for evaluating graphs,
- KernelRegistry, a pluggable registry mapping OpKind values to kernels,
- Tensor, the runtime tensor value type used for inputs and outputs.
Graphs are constructed by adding input and operation nodes, marking one or more nodes as outputs, and then executing the graph with runtime input bindings.
§Core workflow
A typical workflow is:
- create a Graph,
- add input and operation nodes,
- mark output nodes,
- construct an Executor with a KernelRegistry,
- execute the graph with (NodeId, Tensor) input bindings.
§Examples
use tensor_forge::{Executor, Graph, KernelRegistry, Tensor};
let mut g = Graph::new();
let a = g.input_node(vec![2, 2]);
let b = g.input_node(vec![2, 2]);
let out = g.add(a, b).expect("Valid add operation should succeed");
g.set_output_node(out)
.expect("Setting output node should succeed");
let a_tensor = Tensor::zeros(vec![2, 2]).expect("Tensor allocation should succeed");
let b_tensor = Tensor::zeros(vec![2, 2]).expect("Tensor allocation should succeed");
let expected = Tensor::from_vec(vec![2, 2], vec![0_f64, 0_f64, 0_f64, 0_f64]).expect("Tensor allocation should succeed");
let exec = Executor::new(KernelRegistry::default()); // default registry provides the built-in kernel mappings
let outputs = exec
.execute(&g, vec![(a, a_tensor), (b, b_tensor)])
.expect("Execution should succeed");
// `outputs` now contains the resulting tensors of nodes marked as outputs
assert!(outputs.contains_key(&out));
let output_tensor: &Tensor = &outputs[&out];
assert_eq!(output_tensor.shape(), expected.shape());
assert_eq!(output_tensor.data().len(), expected.data().len());
assert_eq!(output_tensor.data(), expected.data());

See the examples/ directory for larger runnable examples, including:
- add_graph.rs: introductory example
- branching_graph.rs: complex chain example with Add, ReLU, and MatMul
- feedforward_neural_net.rs: programmatic neural network generation
- custom_kernel.rs: defining custom kernels
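The pluggable-registry idea behind custom kernels can be illustrated in isolation. The sketch below is a minimal, self-contained illustration of the OpKind-to-kernel dispatch pattern, not tensor_forge's actual trait signatures or types, which may differ; see custom_kernel.rs for the real API.

```rust
use std::collections::HashMap;

// Hypothetical minimal mirrors of the crate's OpKind / Kernel / KernelRegistry
// shapes; the real definitions in tensor_forge may differ.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum OpKind {
    Add,
}

// A kernel consumes input buffers and produces an output buffer.
trait Kernel {
    fn run(&self, inputs: &[&[f64]]) -> Vec<f64>;
}

struct AddKernel;
impl Kernel for AddKernel {
    fn run(&self, inputs: &[&[f64]]) -> Vec<f64> {
        // Elementwise sum of the first two inputs.
        inputs[0].iter().zip(inputs[1]).map(|(a, b)| a + b).collect()
    }
}

// The registry maps OpKind values to boxed kernel implementations,
// so new operations can be registered without touching the executor.
struct KernelRegistry {
    kernels: HashMap<OpKind, Box<dyn Kernel>>,
}

impl KernelRegistry {
    fn new() -> Self {
        Self { kernels: HashMap::new() }
    }
    fn register(&mut self, op: OpKind, k: Box<dyn Kernel>) {
        self.kernels.insert(op, k);
    }
    fn dispatch(&self, op: OpKind, inputs: &[&[f64]]) -> Option<Vec<f64>> {
        self.kernels.get(&op).map(|k| k.run(inputs))
    }
}

fn main() {
    let mut reg = KernelRegistry::new();
    reg.register(OpKind::Add, Box::new(AddKernel));
    let out = reg.dispatch(OpKind::Add, &[&[1.0, 2.0], &[3.0, 4.0]]).unwrap();
    assert_eq!(out, vec![4.0, 6.0]);
}
```

Dispatching through a trait object keyed by OpKind is what makes the registry "pluggable": the executor only needs the trait, never the concrete kernel types.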
§Module overview
- executor contains graph execution and execution-time errors.
- graph contains graph construction, validation, and topology utilities.
- kernel defines the kernel trait and kernel-level errors.
- node defines graph node identifiers and node metadata.
- op defines supported operation kinds.
- registry contains the kernel registry.
- tensor defines the tensor value type.
Re-exports§
pub use executor::ExecutionError;
pub use executor::Executor;
pub use graph::Graph;
pub use graph::GraphError;
pub use kernel::Kernel;
pub use kernel::KernelError;
pub use node::Node;
pub use node::NodeId;
pub use op::OpKind;
pub use registry::KernelRegistry;
pub use tensor::Tensor;
Modules§
- executor
- Execution engine for evaluating compute graphs against a KernelRegistry.
- graph
- Structures for representing ML compute graphs via a Node and Op intermediate representation.
- kernel
- Defines runtime-executable compute kernels.
- node
- Represents one operation instance in an ML graph.
- op
- Defines supported ML operations.
- registry
- Kernel registry for runtime dispatch of compute operations.
- tensor
- Representations for dense, multidimensional arrays stored in contiguous memory.
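Since the tensor module stores dense, multidimensional arrays in contiguous memory, a shape must be mapped to a flat offset. The sketch below assumes a row-major ("C order") layout; tensor_forge's actual internal layout is an assumption here, and the helper name is hypothetical.

```rust
// Hypothetical row-major index calculation for a dense tensor stored in one
// contiguous buffer. The stride of the last axis is 1; each earlier axis's
// stride is the product of the sizes of the axes after it.
fn flat_index(shape: &[usize], idx: &[usize]) -> usize {
    let mut offset = 0;
    let mut stride = 1;
    // Walk axes from last to first, accumulating strides as we go.
    for (dim, i) in shape.iter().zip(idx).rev() {
        offset += i * stride;
        stride *= dim;
    }
    offset
}

fn main() {
    // A 2x3 tensor laid out as [t[0][0], t[0][1], t[0][2], t[1][0], ...]
    let shape = [2, 3];
    assert_eq!(flat_index(&shape, &[0, 0]), 0);
    assert_eq!(flat_index(&shape, &[0, 2]), 2);
    assert_eq!(flat_index(&shape, &[1, 0]), 3);
    assert_eq!(flat_index(&shape, &[1, 2]), 5);
}
```

Contiguous row-major storage is what lets kernels like the elementwise Add in the example above iterate over `data()` directly without per-element index arithmetic.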