onnxruntime 0.0.4

Wrapper around Microsoft's ONNX Runtime

ONNX Runtime

This crate is a (safe) wrapper around Microsoft's ONNX Runtime through its C API.

From its GitHub page:

ONNX Runtime is a cross-platform, high performance ML inferencing and training accelerator.

The (highly) unsafe C API is wrapped using bindgen as onnxruntime-sys.

The unsafe bindings are wrapped in this crate to expose a safe API.

For now, efforts are concentrated on the inference API. Training is not supported.
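
The split between the two crates follows a common pattern: the raw, status-code-based C functions live in onnxruntime-sys, and this crate turns them into Result-returning calls. The following is a minimal, self-contained sketch of that pattern using made-up names; it is not the actual onnxruntime-sys API.

// Sketch of wrapping an unsafe, C-style binding in a safe Rust function.
// The names below are hypothetical; the real bindings are generated by bindgen.
mod sys {
    // Stand-in for a raw binding: returns a C-style status code.
    pub unsafe fn create_env(_log_level: i32) -> i32 {
        0 // 0 == success in this mock
    }
}

/// Safe wrapper: encapsulates the `unsafe` call and turns the status code into a `Result`.
pub fn create_env(log_level: i32) -> Result<(), String> {
    let status = unsafe { sys::create_env(log_level) };
    if status == 0 {
        Ok(())
    } else {
        Err(format!("ONNX Runtime error code: {}", status))
    }
}

fn main() {
    create_env(2).expect("failed to create environment");
}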

Example

The C++ example that uses the C API (C_Api_Sample.cpp) was ported to Rust using this crate.

First, an environment must be created using an EnvBuilder:

# use std::error::Error;
# use onnxruntime::*;
# fn main() -> Result<(), Box<dyn Error>> {
let env = EnvBuilder::new()
    .with_name("test")
    .with_log_level(LoggingLevel::Verbose)
    .build()?;
# Ok(())
# }

Then a Session is created from the environment, some options and an ONNX archive:

# use std::error::Error;
# use onnxruntime::*;
# fn main() -> Result<(), Box<dyn Error>> {
# let env = EnvBuilder::new()
#     .with_name("test")
#     .with_log_level(LoggingLevel::Verbose)
#     .build()?;
let mut session = env
    .new_session_builder()?
    .with_optimization_level(GraphOptimizationLevel::Basic)?
    .with_number_threads(1)?
    .load_model_from_file("squeezenet.onnx")?;
# Ok(())
# }

Inference will be run on data passed as an ndarray::Array.

# use std::error::Error;
# use onnxruntime::*;
# fn main() -> Result<(), Box<dyn Error>> {
# let env = EnvBuilder::new()
#     .with_name("test")
#     .with_log_level(LoggingLevel::Verbose)
#     .build()?;
# let mut session = env
#     .new_session_builder()?
#     .with_optimization_level(GraphOptimizationLevel::Basic)?
#     .with_number_threads(1)?
#     .load_model_from_file("squeezenet.onnx")?;
// `linspace` returns an `ndarray::Array1<f32>` directly and cannot fail.
let array = ndarray::Array::linspace(0.0_f32, 1.0, 100);
// Multiple inputs and outputs are possible
let input_tensor = vec![array];
let outputs: Vec<TensorFromOrt> = session.run(input_tensor)?;
# Ok(())
# }
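
Note that the linspace input above is only a placeholder; a real model expects data of a specific shape. As a hypothetical sketch (the 1x3x224x224 shape is an assumption about the SqueezeNet model, not something stated here), such an input could be built with ndarray like this:

# fn main() {
// Hypothetical preprocessing sketch: image classifiers such as SqueezeNet
// commonly take a (batch, channels, height, width) float tensor.
let input: ndarray::Array4<f32> = ndarray::Array::zeros((1, 3, 224, 224));
// The array would then be passed to the session as above, e.g. `session.run(vec![input])`,
// assuming `run` accepts a 4-dimensional array for this model.
# let _ = input;
# }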

The outputs are TensorFromOrt values collected in a vector, with the same length as the inputs.
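
As a purely illustrative follow-up, assuming the values of one output tensor have been copied into a plain Vec<f32> (how to do that copy is not covered here), picking the predicted class of an image classifier is just an argmax over the scores:

# fn main() {
// Hypothetical post-processing sketch: `scores` stands in for one output tensor's values.
let scores: Vec<f32> = vec![0.1, 0.7, 0.2];
// Index of the highest score, i.e. the predicted class.
let best = scores
    .iter()
    .enumerate()
    .max_by(|(_, a), (_, b)| a.partial_cmp(b).unwrap())
    .map(|(idx, _)| idx);
println!("predicted class index: {:?}", best);
# }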

See the sample.rs example for more details.