This crate is a (safe) wrapper around Microsoft’s ONNX Runtime through its C API.
From its GitHub page:
ONNX Runtime is a cross-platform, high performance ML inferencing and training accelerator.
The unsafe bindings are wrapped in this crate to expose a safe API.
For now, efforts are concentrated on the inference API. Training is not supported.
First, an environment must be created using an `EnvBuilder`:
```rust
let environment = Environment::builder()
    .with_name("test")
    .with_log_level(LoggingLevel::Verbose)
    .build()?;
```
Then a `Session` is created from the environment, some options, and an ONNX archive:
```rust
let mut session = environment
    .new_session_builder()?
    .with_optimization_level(GraphOptimizationLevel::Basic)?
    .with_number_threads(1)?
    .with_model_from_file("squeezenet.onnx")?;
```
Alternatively, a pre-trained model can be downloaded directly from the ONNX Model Zoo:

```rust
let mut session = environment
    .new_session_builder()?
    .with_optimization_level(GraphOptimizationLevel::Basic)?
    .with_number_threads(1)?
    .with_model_downloaded(ImageClassification::SqueezeNet)?;
```

See `AvailableOnnxModel` for the different models available for download.
Inference will be run on data passed as an `ndarray::Array`:
```rust
let array = ndarray::Array::linspace(0.0_f32, 1.0, 100);
// Multiple inputs and outputs are possible
let input_tensor = vec![array];
let outputs: Vec<OrtOwnedTensor<f32, _>> = session.run(input_tensor)?;
```
The outputs are `OrtOwnedTensor`s inside a vector with the same length as the inputs. See the included example for more details.
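Once inference has produced the output scores, a classification model such as SqueezeNet is typically post-processed with a softmax to turn raw scores into probabilities. A minimal sketch in plain Rust, where the `scores` vector stands in for the flattened data of an `OrtOwnedTensor<f32, _>` (the values are hypothetical, and this helper is not part of the crate's API):

```rust
// Softmax over raw classification scores (e.g. the flattened output
// of a SqueezeNet inference run). Values here are made up.
fn softmax(scores: &[f32]) -> Vec<f32> {
    // Subtract the max score before exponentiating for numerical stability.
    let max = scores.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = scores.iter().map(|&s| (s - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    let scores = vec![1.0_f32, 2.0, 3.0];
    let probs = softmax(&scores);

    // Probabilities sum to 1; the class with the highest raw score
    // also gets the highest probability.
    println!("probabilities: {:?}", probs);
    let best = probs
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .map(|(i, _)| i)
        .unwrap();
    println!("predicted class index: {}", best);
}
```

The argmax at the end is the usual final step for picking the predicted label index out of the probability vector.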
Module controlling models downloadable from the ONNX Model Zoo.
Module containing environment types.
Module containing error definitions.
Module containing session types.
Module containing tensor types.
Optimization level applied by ONNX Runtime to the loaded graph.
Logging level of the ONNX Runtime C API.
Enum mapping ONNX Runtime’s supported tensor types.