This crate provides an idiomatic Rust API for TVM.
The code works on Stable Rust and is tested against
You can find the API Documentation here.
What Does This Crate Offer?
The goal of this crate is to provide bindings to both the TVM compiler and runtime APIs. First, train your deep learning model using any major framework such as PyTorch, Apache MXNet, or TensorFlow. Then, use TVM to build and deploy optimized model artifacts on supported devices such as CPUs, GPUs, OpenCL devices, and specialized accelerators.
The Rust bindings are composed of a few crates:
- The tvm crate which exposes Rust bindings to both the compiler and runtime.
- The tvm_macros crate which provides macros which generate unsafe boilerplate for TVM's data structures.
- The tvm_rt crate which exposes Rust bindings to the TVM runtime APIs.
- The tvm_sys crate which provides raw bindings and linkage to the TVM C++ library.
- The tvm_graph_rt crate which implements a version of the TVM graph runtime in Rust rather than C++.
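As a sketch, pulling the main tvm crate into your own Cargo project might look like the following; whether you depend on a published crate or on a path inside a TVM checkout depends on how you obtained TVM, so treat the source below as an assumption:

```toml
[dependencies]
# Hypothetical: adjust the path to wherever your TVM checkout lives.
tvm = { path = "/path/to/tvm/rust/tvm" }
```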
These crates were recently refactored and reflect a different philosophy from the previous bindings, as well as greatly expanded coverage of the TVM API, including exposure of the compiler internals.
These crates are still very much in development and should not be considered stable, but contributions and usage are welcome and encouraged. To discuss design issues, visit our Discourse forum; for bug reports, use our GitHub repository.
Please follow the TVM install instructions, export TVM_HOME=/path/to/tvm, and add libtvm_runtime to your library search path (e.g. LD_LIBRARY_PATH on Linux).
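A minimal sketch of this setup, assuming TVM was built in a `build` directory inside the checkout (the directory name is an assumption; adjust it to match your build configuration):

```shell
# Point the bindings at your TVM checkout.
export TVM_HOME=/path/to/tvm

# Make libtvm_runtime discoverable by the dynamic linker.
# (the build directory name is an assumption)
export LD_LIBRARY_PATH=$TVM_HOME/build:$LD_LIBRARY_PATH
```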
Note: to run the end-to-end examples and tests, the tvm and topi Python packages need to be added to your PYTHONPATH, or this is handled automatically when they are installed via an Anaconda environment.
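For example, assuming TVM_HOME points at your TVM checkout, the packages can be added like this (the `python` and `topi/python` subdirectories reflect TVM's source layout, which may differ across versions):

```shell
# Assumption: TVM_HOME points at your TVM checkout.
export TVM_HOME=/path/to/tvm

# Make the tvm and topi Python packages importable for the examples/tests.
export PYTHONPATH=$TVM_HOME/python:$TVM_HOME/topi/python:$PYTHONPATH
```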