Unsafe bindings for ExecuTorch - On-device AI across mobile, embedded and edge for PyTorch.
Provides low-level Rust bindings for the ExecuTorch library.
For the common use case, it is recommended to use the high-level API provided by the executorch crate, where
more detailed documentation can be found.
To build the library, you need to build the C++ library yourself first.
Currently the supported C++ executorch version is 1.0.1 (or 1.0.0).
The C++ library allows for great flexibility with many flags, customizing which modules, kernels, and extensions are
built.
Multiple static libraries are built, and the Rust library links to them.
In the following example we build the C++ library with the necessary flags to run the hello_world example:
# Clone the C++ library
cd ${EXECUTORCH_CPP_DIR}
git clone --depth 1 --branch v1.0.1 https://github.com/pytorch/executorch.git .
git submodule sync --recursive
git submodule update --init --recursive
# Install requirements
./install_requirements.sh
# Build C++ library
mkdir cmake-out && cd cmake-out
cmake \
-DEXECUTORCH_SELECT_OPS_LIST=aten::add.out \
-DEXECUTORCH_BUILD_EXECUTOR_RUNNER=OFF \
-DEXECUTORCH_BUILD_EXTENSION_RUNNER_UTIL=OFF \
-DEXECUTORCH_BUILD_PORTABLE_OPS=ON \
-DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
-DEXECUTORCH_BUILD_EXTENSION_FLAT_TENSOR=ON \
-DEXECUTORCH_BUILD_EXTENSION_NAMED_DATA_MAP=ON \
-DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
-DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \
-DEXECUTORCH_ENABLE_PROGRAM_VERIFICATION=ON \
-DEXECUTORCH_ENABLE_LOGGING=ON \
..
make -j
# Run example
# We set EXECUTORCH_RS_EXECUTORCH_LIB_DIR to the path of the C++ build output
cd ${EXECUTORCH_RS_DIR}/examples/hello_world
python export_model.py
EXECUTORCH_RS_EXECUTORCH_LIB_DIR=${EXECUTORCH_CPP_DIR}/cmake-out cargo run
The executorch crate will always look for the following static libraries:
- libexecutorch.a
- libexecutorch_core.a
Additional libs are required if feature flags are enabled.
For example, libextension_data_loader.a is required if the data-loader feature is enabled,
and libextension_tensor.a is required if the tensor-ptr feature is enabled.
See the feature flags section for more info.
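A missing static library otherwise only surfaces later as an opaque linker error, so a quick sanity check can help. The following build.rs fragment is an optional minimal sketch (not part of executorch-sys): it scans the directory pointed to by EXECUTORCH_RS_EXECUTORCH_LIB_DIR for the libraries named above, assuming for illustration that the data-loader and tensor-ptr features are enabled; the find_lib helper is purely illustrative.

```rust
// build.rs of a binary using executorch-sys -- optional sanity check, sketch only.
use std::path::{Path, PathBuf};

/// Recursively search `root` for a file named `name` (illustrative helper).
fn find_lib(root: &Path, name: &str) -> Option<PathBuf> {
    for entry in std::fs::read_dir(root).ok()?.flatten() {
        let path = entry.path();
        if path.is_dir() {
            if let Some(found) = find_lib(&path, name) {
                return Some(found);
            }
        } else if path.file_name().map(|f| f == name).unwrap_or(false) {
            return Some(path);
        }
    }
    None
}

fn main() {
    let libs_dir = std::env::var("EXECUTORCH_RS_EXECUTORCH_LIB_DIR")
        .expect("EXECUTORCH_RS_EXECUTORCH_LIB_DIR should point to the C++ cmake-out directory");
    // Always required, plus the libs for the `data-loader` and `tensor-ptr` features
    // (assumed to be enabled in this sketch).
    let required = [
        "libexecutorch.a",
        "libexecutorch_core.a",
        "libextension_data_loader.a",
        "libextension_tensor.a",
    ];
    for name in required {
        if find_lib(Path::new(&libs_dir), name).is_none() {
            println!("cargo::warning=static library not found under {libs_dir}: {name}");
        }
    }
}
```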
The static libraries of the kernel implementations are required only if your model uses them, and they should be
linked manually by the binary that uses the executorch crate.
For example, the hello_world example uses a model with a single addition operation, so it compiles the C++
library with -DEXECUTORCH_SELECT_OPS_LIST=aten::add.out and contains the following lines in its build.rs:
println!("cargo::rustc-link-lib=static:+whole-archive=portable_kernels");
println!("cargo::rustc-link-lib=static:+whole-archive=portable_ops_lib");
let libs_dir = std::env::var("EXECUTORCH_RS_EXECUTORCH_LIB_DIR").unwrap();
println!("cargo::rustc-link-search=native={libs_dir}/kernels/portable/");Note that the ops and kernels libs are linked with +whole-archive to ensure that all symbols are included in the
binary.
The EXECUTORCH_RS_EXECUTORCH_LIB_DIR environment variable should be set to the path of the C++ build output.
If it is not provided, it is the responsibility of the binary to add the libs directories to the linker search path, and
the crate will just link to the static libraries using cargo::rustc-link-lib=....
If you want to link to the executorch libs yourself, set the environment variable EXECUTORCH_RS_LINK to 0, and
the crate will not link to any library or modify the linker search path.
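As a reference for that fully manual setup, here is a minimal build.rs sketch, assuming EXECUTORCH_RS_LINK=0 is set when building. The library names, link directives, and the kernels/portable subdirectory come from the examples above; the EXECUTORCH_CPP_BUILD_DIR variable and the assumption that the core .a files sit directly under the cmake build directory are illustrative only.

```rust
// build.rs of a binary that links the ExecuTorch static libraries itself
// (build with EXECUTORCH_RS_LINK=0 so executorch-sys emits no link directives).
fn main() {
    // Hypothetical variable used only by this sketch; point it at the C++ cmake-out dir.
    let cmake_out = std::env::var("EXECUTORCH_CPP_BUILD_DIR")
        .expect("EXECUTORCH_CPP_BUILD_DIR should point to the C++ cmake-out directory");

    // Linker search paths (the layout of the .a files under cmake-out is an assumption).
    println!("cargo::rustc-link-search=native={cmake_out}");
    println!("cargo::rustc-link-search=native={cmake_out}/kernels/portable");

    // Core libraries that executorch-sys would otherwise link on its own.
    println!("cargo::rustc-link-lib=static=executorch");
    println!("cargo::rustc-link-lib=static=executorch_core");
    // Required only when the `data-loader` feature is enabled.
    println!("cargo::rustc-link-lib=static=extension_data_loader");

    // Kernel/ops libraries are always the binary's responsibility; +whole-archive keeps
    // the operator registration symbols that are never referenced directly.
    println!("cargo::rustc-link-lib=static:+whole-archive=portable_kernels");
    println!("cargo::rustc-link-lib=static:+whole-archive=portable_ops_lib");
}
```

The exact set of libraries depends on the enabled features and on how the C++ build was configured.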
The crate contains a small C/C++ bridge that uses the headers of the C++ library,
and it is compiled using the cc crate (and the cxx crate, which uses cc under the hood).
If custom compiler flags (for example -DET_MIN_LOG_LEVEL=Debug) were used when compiling the C++ library,
you should set the matching environment variables that cc reads during cargo build
(for example CFLAGS=-DET_MIN_LOG_LEVEL=Debug CXXFLAGS=-DET_MIN_LOG_LEVEL=Debug);
see the cc docs.
Cargo Features
By default the std feature is enabled.
- data-loader: Includes the FileDataLoader and MmapDataLoader structs. Without this feature the only available data loader is BufferDataLoader. The libextension_data_loader.a static library is required, compile C++ executorch with EXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON.
- module: Includes the Module struct, a high-level API for loading and executing PyTorch models. It is an alternative to the lower-level Program API, which is more suitable for embedded systems. The libextension_module_static.a static library is required, compile C++ executorch with EXECUTORCH_BUILD_EXTENSION_MODULE=ON. Also includes the std, data-loader and flat-tensor features.
- tensor-ptr: Includes a few functions creating cxx::SharedPtr<Tensor> pointers, that manage the lifetime of the tensor object alongside the lifetimes of the data buffer and additional metadata. The libextension_tensor.a static library is required, compile C++ executorch with EXECUTORCH_BUILD_EXTENSION_TENSOR=ON. Also includes the std feature.
- flat-tensor: Includes the FlatTensorDataMap struct that can read .ptd files with external tensors for models. The libextension_flat_tensor.a static library is required, compile C++ executorch with EXECUTORCH_BUILD_EXTENSION_FLAT_TENSOR=ON.
- etdump: Includes the ETDumpGen struct, an implementation of an EventTracer, used for debugging and profiling. The libetdump.a static library is required, compile C++ executorch with EXECUTORCH_BUILD_DEVTOOLS=ON and EXECUTORCH_ENABLE_EVENT_TRACER=ON. In addition, the flatcc (or flatcc_d) library is required, available at {CMAKE_DIR}/third-party/flatcc_ep/lib/, and should be linked by the user.
- std: Enable the standard library. This feature is enabled by default, but can be disabled to build executorch in a no_std environment. NOTE: no_std is still WIP, see https://github.com/pytorch/executorch/issues/4561
Re-exports
- pub use cxx; (std)
Modules
- util - Utility functions and structs.
Structs
- ArrayRefBool
- ArrayRefChar
- ArrayRefDimOrderType
- ArrayRefEValue
- ArrayRefEValuePtr
- ArrayRefF64
- ArrayRefI32
- ArrayRefI64
- ArrayRefOptionalTensor
- ArrayRefSizesType
- ArrayRefStridesType
- ArrayRefTensor
- ArrayRefU8
- ArrayRefUsizeType
- BoxedEvalueListI64
- BoxedEvalueListOptionalTensor
- BoxedEvalueListTensor
- BufferDataLoader
- DataLoaderRefMut
- ETDumpGen
- EValueRef
- EValueRefMut
- EValueStorage
- EventTracer (module) - EventTracer is a class that users can inherit and implement to log/serialize/stream etc.
- EventTracerRefMut
- ExecutorchPalImpl
- FileDataLoader
- FlatTensorDataMap
- HierarchicalAllocator
- MallocMemoryAllocator - Dynamically allocates memory using malloc() and frees all pointers at destruction time.
- MemoryAllocator
- MemoryManager
- Method
- MethodMeta
- MmapDataLoader
- Module (module) - A facade class for loading programs and executing methods within them.
- NamedDataMapRef
- NamedDataMapRefMut
- OptionalTensorRef
- OptionalTensorRefMut
- OptionalTensorStorage
- Program
- SpanI64
- SpanOptionalTensor
- SpanSpanU8
- SpanTensor
- SpanU8
- Tensor (tensor-ptr) - A minimal Tensor type whose API is a source compatible subset of at::Tensor.
- TensorImpl
- TensorInfo
- TensorLayout
- TensorRef
- TensorRefMut
- TensorStorage
- VecChar
- VecEValue
- VecVecChar
- executorch_tick_ratio - Represents the conversion ratio from system ticks to nanoseconds. To convert, use nanoseconds = ticks * numerator / denominator.
Enums
- Error - ExecuTorch Error type.
- MmapDataLoaderMlockConfig - Describes how and whether to lock loaded pages with mlock().
- ModuleLoadMode - Enum to define loading behavior.
- ProgramHeaderStatus - Describes the presence of an ExecuTorch program header.
- ProgramVerification - Types of validation that the Program can do before parsing the data.
- ScalarType
- Tag
- TensorShapeDynamism - The resizing capabilities of a Tensor.
- executorch_pal_log_level - Severity level of a log message. Values must map to printable 7-bit ASCII uppercase letters.
Constants
- EXECUTORCH_CPP_VERSION - The version of the ExecuTorch C++ library that this crate is compatible and linked with.
Functions
- MallocMemoryAllocator_as_memory_allocator ⚠ - Get a pointer to the base class MemoryAllocator.
- MallocMemoryAllocator_new - Construct a new Malloc memory allocator.
- Module_execute ⚠ (module) - Execute a specific method with the given input values and retrieve the output values. Loads the program and method before executing if needed.
- Module_is_loaded (module) - Checks if the program is loaded.
- Module_is_method_loaded (module) - Checks if a specific method is loaded.
- Module_load (module) - Load the program if needed.
- Module_load_method ⚠ (module) - Load a specific method from the program and set up memory management if needed.
- Module_method_meta ⚠ (module) - Get a method metadata struct by method name.
- Module_method_names ⚠ (module) - Get a list of method names available in the loaded program.
- Module_new (module) - Constructs an instance by loading a program from a file with specified memory locking behavior.
- Module_num_methods ⚠ (module) - Get the number of methods available in the loaded program.
- Module_unload_method ⚠ (module) - Unload a specific method from the program.
- TensorPtr_new ⚠ (tensor-ptr) - Create a new tensor pointer.
- executorch_BufferDataLoader_as_data_loader_mut ⚠
- executorch_BufferDataLoader_new ⚠
- executorch_ETDumpGen_as_event_tracer_mut ⚠
- executorch_ETDumpGen_get_etdump_data ⚠
- executorch_ETDumpGen_new ⚠
- executorch_EValue_as_bool ⚠
- executorch_EValue_as_bool_list ⚠
- executorch_EValue_as_f64 ⚠
- executorch_EValue_as_f64_list ⚠
- executorch_EValue_as_i64 ⚠
- executorch_EValue_as_i64_list ⚠
- executorch_EValue_as_optional_tensor_list ⚠
- executorch_EValue_as_string ⚠
- executorch_EValue_as_tensor ⚠
- executorch_EValue_as_tensor_list ⚠
- executorch_EValue_copy ⚠
- executorch_EValue_destructor ⚠
- executorch_EValue_move ⚠
- executorch_EValue_new_from_bool ⚠
- executorch_EValue_new_from_bool_list ⚠
- executorch_EValue_new_from_f64 ⚠
- executorch_EValue_new_from_f64_list ⚠
- executorch_EValue_new_from_i64 ⚠
- executorch_EValue_new_from_i64_list ⚠
- executorch_EValue_new_from_optional_tensor_list ⚠
- executorch_EValue_new_from_string ⚠
- executorch_EValue_new_from_tensor ⚠
- executorch_EValue_new_from_tensor_list ⚠
- executorch_EValue_new_none ⚠
- executorch_EValue_tag ⚠
- executorch_FileDataLoader_as_data_loader_mut ⚠
- executorch_FileDataLoader_destructor ⚠
- executorch_FileDataLoader_new ⚠
- executorch_FlatTensorDataMap_as_named_data_map_mut ⚠
- executorch_FlatTensorDataMap_load ⚠
- executorch_HierarchicalAllocator_destructor ⚠
- executorch_HierarchicalAllocator_new ⚠
- executorch_MemoryAllocator_allocate ⚠
- executorch_MemoryAllocator_new ⚠
- executorch_MemoryManager_new ⚠
- executorch_MethodMeta_attribute_tensor_meta ⚠
- executorch_MethodMeta_get_backend_name ⚠
- executorch_MethodMeta_input_tag ⚠
- executorch_MethodMeta_input_tensor_meta ⚠
- executorch_MethodMeta_memory_planned_buffer_size ⚠
- executorch_MethodMeta_name ⚠
- executorch_MethodMeta_num_attributes ⚠
- executorch_MethodMeta_num_backends ⚠
- executorch_MethodMeta_num_inputs ⚠
- executorch_MethodMeta_num_memory_planned_buffers ⚠
- executorch_MethodMeta_num_outputs ⚠
- executorch_MethodMeta_output_tag ⚠
- executorch_MethodMeta_output_tensor_meta ⚠
- executorch_MethodMeta_uses_backend ⚠
- executorch_Method_destructor ⚠
- executorch_Method_execute ⚠
- executorch_Method_get_attribute ⚠
- executorch_Method_get_output ⚠
- executorch_Method_inputs_size ⚠
- executorch_Method_outputs_size ⚠
- executorch_Method_set_input ⚠
- executorch_MmapDataLoader_as_data_loader_mut ⚠
- executorch_MmapDataLoader_destructor ⚠
- executorch_MmapDataLoader_new ⚠
- executorch_NamedDataMap_get_key ⚠
- executorch_NamedDataMap_get_num_keys ⚠
- executorch_NamedDataMap_get_tensor_layout ⚠
- executorch_OptionalTensor_get ⚠
- executorch_Program_check_header ⚠
- executorch_Program_destructor ⚠
- executorch_Program_get_method_name ⚠
- executorch_Program_get_named_data_map ⚠
- executorch_Program_load ⚠
- executorch_Program_load_method ⚠
- executorch_Program_method_meta ⚠
- executorch_Program_num_methods ⚠
- executorch_TensorImpl_new ⚠
- executorch_TensorInfo_dim_order ⚠
- executorch_TensorInfo_is_memory_planned ⚠
- executorch_TensorInfo_name ⚠
- executorch_TensorInfo_nbytes ⚠
- executorch_TensorInfo_scalar_type ⚠
- executorch_TensorInfo_sizes ⚠
- executorch_TensorLayout_dim_order ⚠
- executorch_TensorLayout_nbytes ⚠
- executorch_TensorLayout_scalar_type ⚠
- executorch_TensorLayout_sizes ⚠
- executorch_Tensor_const_data_ptr ⚠
- executorch_Tensor_coordinate_to_index ⚠
- executorch_Tensor_coordinate_to_index_unchecked ⚠
- executorch_Tensor_destructor ⚠
- executorch_Tensor_dim ⚠
- executorch_Tensor_dim_order ⚠
- executorch_Tensor_element_size ⚠
- executorch_Tensor_mutable_data_ptr ⚠
- executorch_Tensor_nbytes ⚠
- executorch_Tensor_new ⚠
- executorch_Tensor_numel ⚠
- executorch_Tensor_scalar_type ⚠
- executorch_Tensor_size ⚠
- executorch_Tensor_sizes ⚠
- executorch_Tensor_strides ⚠
- executorch_VecChar_destructor ⚠
- executorch_VecEValue_destructor ⚠
- executorch_VecVecChar_destructor ⚠
- executorch_get_pal_impl ⚠ - Returns the PAL function table, which contains function pointers to the active implementation of each PAL function.
- executorch_is_valid_dim_order_and_strides ⚠
- executorch_pal_abort ⚠ - Immediately abort execution, setting the device into an error state, if available.
- executorch_pal_allocate ⚠ - NOTE: Core runtime code must not call this directly. It may only be called by a MemoryAllocator wrapper.
- executorch_pal_current_ticks ⚠ - Return a monotonically non-decreasing timestamp in system ticks.
- executorch_pal_emit_log_message ⚠ - Severity level of a log message. Values must map to printable 7-bit ASCII uppercase letters.
- executorch_pal_free ⚠ - Frees memory allocated by et_pal_allocate().
- executorch_pal_init ⚠ - Initialize the platform abstraction layer.
- executorch_pal_ticks_to_ns_multiplier ⚠ - Return the conversion rate from system ticks to nanoseconds as a fraction. To convert system ticks to nanoseconds, multiply the tick count by the numerator and then divide by the denominator: nanoseconds = ticks * numerator / denominator
- executorch_register_pal ⚠ - Override the PAL functions with user implementations. Any null entries in the table are unchanged and will keep the default implementation.
- executorch_stride_to_dim_order ⚠
Type Aliases
- DimOrderType - The type used for elements of Tensor.dim_order().
- SizesType - The type used for elements of Tensor.sizes().
- StridesType - The type used for elements of Tensor.strides().
- executorch_timestamp_t - Platform timestamp in system ticks.