tract-core 0.5.6

Tract

Tiny, no-nonsense, self contained, portable TensorFlow and ONNX inference.

Example

# extern crate tract_core;
# fn main() {
use tract_core::internal::*;

// build a simple model that just adds 3 to each input component
let mut model = InferenceModel::default();

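// declare the model input as a source node; the default (unconstrained)
// InferenceFact leaves its type and shape to be worked out by inference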
let input = model.add_source("input", InferenceFact::default()).unwrap();
let three = model.add_const("three".to_string(), tensor0(3f32)).unwrap();
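// wire a broadcasting element-wise addition taking the source and the
// constant as its two inputs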
let add = model.wire_node("add".to_string(),
    tract_core::ops::math::add::bin(),
    [input, three].as_ref()
).unwrap();

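// declare the terminal nodes of the graph as the model outputs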
model.auto_outputs().unwrap();

// We build an execution plan. Default inputs and outputs are inferred from
// the model graph.
let plan = SimplePlan::new(&model).unwrap();

// run the computation.
let input = tensor1(&[1.0f32, 2.5, 5.0]);
let mut outputs = plan.run(tvec![input]).unwrap();

// take the first and only output tensor
let tensor = outputs.pop().unwrap();

assert_eq!(tensor, rctensor1(&[4.0f32, 5.5, 8.0]));
# }

While creating a model from Rust code is useful for testing the library, real-life use cases will usually load a TensorFlow or ONNX model with the tract-tensorflow or tract-onnx crate.
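
For instance, loading an ONNX network with tract-onnx typically follows a small builder pipeline: parse the file, pin down the input fact, optimize, then make the model runnable. The sketch below follows the tract-onnx prelude API of the same era; the file name "model.onnx" and the 1x3x224x224 input shape are placeholders to adapt to your own model.

use tract_onnx::prelude::*;

fn main() -> TractResult<()> {
    let model = tract_onnx::onnx()
        // parse the ONNX file ("model.onnx" is a placeholder path)
        .model_for_path("model.onnx")?
        // declare the type and shape of the first input (placeholder shape)
        .with_input_fact(0, InferenceFact::dt_shape(f32::datum_type(), tvec!(1, 3, 224, 224)))?
        // run the optimisation passes
        .into_optimized()?
        // turn the model into an execution plan
        .into_runnable()?;

    // run it on a dummy all-zero input tensor of the declared shape
    let input: Tensor = tract_ndarray::Array4::<f32>::zeros((1, 3, 224, 224)).into();
    let outputs = model.run(tvec!(input))?;
    println!("{:?}", outputs[0]);
    Ok(())
}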