Crate executorch_sys

Unsafe bindings for ExecuTorch - On-device AI across mobile, embedded and edge for PyTorch.

Provides low-level Rust bindings for the ExecuTorch library. For the common use case it is recommended to use the high-level API provided by the executorch crate, where more detailed documentation can be found.

To build this crate, you need to build the C++ library first. The C++ library allows for great flexibility through many flags that customize which modules, kernels, and extensions are built. Multiple static libraries are produced, and the Rust library links to them. In the following example we build the C++ library with the flags needed to run the hello_world example:

# Clone the C++ library
cd ${EXECUTORCH_CPP_DIR}
git clone --depth 1 --branch v0.6.0 https://github.com/pytorch/executorch.git .
git submodule sync --recursive
git submodule update --init --recursive

# Install requirements
./install_requirements.sh

# Build C++ library
mkdir cmake-out && cd cmake-out
cmake \
    -DEXECUTORCH_SELECT_OPS_LIST=aten::add.out \
    -DEXECUTORCH_BUILD_EXECUTOR_RUNNER=OFF \
    -DEXECUTORCH_BUILD_EXTENSION_RUNNER_UTIL=OFF \
    -DBUILD_EXECUTORCH_PORTABLE_OPS=ON \
    -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
    -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
    -DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \
    -DEXECUTORCH_ENABLE_PROGRAM_VERIFICATION=ON \
    -DEXECUTORCH_ENABLE_LOGGING=ON \
    ..
make -j

# Static libraries are in cmake-out/
# core:
#   cmake-out/libexecutorch.a
#   cmake-out/libexecutorch_core.a
# kernels implementations:
#   cmake-out/kernels/portable/libportable_ops_lib.a
#   cmake-out/kernels/portable/libportable_kernels.a
# extension data loader, enabled with EXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON:
#   cmake-out/extension/data_loader/libextension_data_loader.a
# extension module, enabled with EXECUTORCH_BUILD_EXTENSION_MODULE=ON:
#   cmake-out/extension/module/libextension_module_static.a
# extension tensor, enabled with EXECUTORCH_BUILD_EXTENSION_TENSOR=ON:
#   cmake-out/extension/tensor/libextension_tensor.a
# devtools (etdump), enabled with EXECUTORCH_BUILD_DEVTOOLS=ON:
#   cmake-out/devtools/libetdump.a

# Run example
# We set EXECUTORCH_RS_EXECUTORCH_LIB_DIR to the path of the C++ build output
cd ${EXECUTORCH_RS_DIR}/examples/hello_world
python export_model.py
EXECUTORCH_RS_EXECUTORCH_LIB_DIR=${EXECUTORCH_CPP_DIR}/cmake-out cargo run

The executorch crate will always look for the following static libraries:

  • libexecutorch.a
  • libexecutorch_core.a

Additional libs are required if feature flags are enabled (see next section):

  • libextension_data_loader.a
  • libextension_module_static.a
  • libextension_tensor.a
  • libetdump.a

The static libraries of the kernel implementations are required only if your model uses them, and they should be linked manually by the binary that uses the executorch crate. For example, the hello_world example uses a model with a single addition operation, so it compiles the C++ library with -DEXECUTORCH_SELECT_OPS_LIST=aten::add.out and contains the following lines in its build.rs:

println!("cargo::rustc-link-lib=static:+whole-archive=portable_kernels");
println!("cargo::rustc-link-lib=static:+whole-archive=portable_ops_lib");

let libs_dir = std::env::var("EXECUTORCH_RS_EXECUTORCH_LIB_DIR").unwrap();
println!("cargo::rustc-link-search=native={libs_dir}/kernels/portable/");

Note that the ops and kernels libs are linked with +whole-archive to ensure that all symbols are included in the binary.

The EXECUTORCH_RS_EXECUTORCH_LIB_DIR environment variable should be set to the path of the C++ build output. If it is not provided, it is the responsibility of the binary to add the libs directories to the linker search path, and the crate will just link to the static libraries using cargo::rustc-link-lib=....
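
For example, a binary's build.rs could fall back to a path of its own choosing when the variable is not set. This is a minimal sketch, not part of the crate; the fallback path below is hypothetical:

fn main() {
    // Kernel libraries are linked with +whole-archive so no operator symbols are dropped.
    println!("cargo::rustc-link-lib=static:+whole-archive=portable_kernels");
    println!("cargo::rustc-link-lib=static:+whole-archive=portable_ops_lib");

    // Prefer the env var described above; fall back to a hypothetical local checkout otherwise.
    let libs_dir = std::env::var("EXECUTORCH_RS_EXECUTORCH_LIB_DIR")
        .unwrap_or_else(|_| "third-party/executorch/cmake-out".to_string());
    println!("cargo::rustc-link-search=native={libs_dir}/kernels/portable/");
}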

If you want to link to the executorch libs yourself, set the environment variable EXECUTORCH_RS_LINK to 0; the crate will then not link to any library or modify the linker search path.

§Cargo Features

By default the std feature is enabled.

  • data-loader: Includes the FileDataLoader and MmapDataLoader structs. Without this feature the only available data loader is BufferDataLoader. The libextension_data_loader.a static library is required; compile the C++ executorch with EXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON.
  • module: Includes the Module struct. The libextension_module_static.a static library is required; compile the C++ executorch with EXECUTORCH_BUILD_EXTENSION_MODULE=ON.
  • tensor-ptr: Includes a few functions that create cxx::SharedPtr<Tensor> pointers, which manage the lifetime of the tensor object alongside the lifetimes of the data buffer and additional metadata. The libextension_tensor.a static library is required; compile the C++ executorch with EXECUTORCH_BUILD_EXTENSION_TENSOR=ON. Also enables the std feature.
  • etdump: Includes the ETDumpGen struct, an implementation of an EventTracer, used for debugging and profiling. The libetdump.a static library is required; compile the C++ executorch with EXECUTORCH_BUILD_DEVTOOLS=ON and EXECUTORCH_ENABLE_EVENT_TRACER=ON. In addition, the flatcc (or flatcc_d) library, available at ${EXECUTORCH_CPP_DIR}/third-party/flatcc/lib/, is required and should be linked by the user (see the build.rs sketch after this list).
  • std: Enables the standard library. This feature is enabled by default, but can be disabled to build executorch in a no_std environment. NOTE: no_std support is still WIP, see https://github.com/pytorch/executorch/issues/4561
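
As an illustration for the etdump feature, a build.rs might link flatcc like this. This is a minimal sketch; the EXECUTORCH_CPP_DIR environment variable is hypothetical and should point to the ExecuTorch C++ checkout:

fn main() {
    // Path to the ExecuTorch C++ source tree; the variable name is an assumption, not crate API.
    let cpp_dir = std::env::var("EXECUTORCH_CPP_DIR").expect("set EXECUTORCH_CPP_DIR");
    println!("cargo::rustc-link-search=native={cpp_dir}/third-party/flatcc/lib/");
    // Link flatcc; use flatcc_d instead if the C++ library was built in debug mode.
    println!("cargo::rustc-link-lib=static=flatcc");
}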

Re-exports§

pub use cxx; (crate feature std only)

Modules§

cpp (crate feature std only)
Bindings generated by the cxx crate.

Structs§

ArrayRefBool
ArrayRefChar
ArrayRefDimOrderType
ArrayRefEValue
ArrayRefEValuePtr
ArrayRefF64
ArrayRefI32
ArrayRefI64
ArrayRefOptionalTensor
ArrayRefSizesType
ArrayRefStridesType
ArrayRefTensor
ArrayRefU8
ArrayRefUsizeType
BoxedEvalueListI64
BoxedEvalueListOptionalTensor
BoxedEvalueListTensor
BufferDataLoader
DataLoaderRefMut
ETDumpGen
EValueRef
EValueRefMut
EValueStorage
EventTracerRefMut
FileDataLoader
HierarchicalAllocator
MemoryAllocator
MemoryManager
Method
MethodMeta
MmapDataLoader
OptionalTensorRef
OptionalTensorRefMut
OptionalTensorStorage
Program
Program__bindgen_ty_1
SpanI64
SpanOptionalTensor
SpanSpanU8
SpanTensor
SpanU8
TensorImpl
TensorInfo
TensorRef
TensorRefMut
TensorStorage
VecChar
VecEValue
VecVecChar

Enums§

Error
ExecuTorch Error type.
MmapDataLoaderMlockConfig
Describes how and whether to lock loaded pages with mlock().
ModuleLoadMode
Enum to define loading behavior.
ProgramHeaderStatus
Describes the presence of an ExecuTorch program header.
ProgramVerification
Types of validation that the Program can do before parsing the data.
ScalarType
Tag
TensorShapeDynamism
The resizing capabilities of a Tensor.

Constants§

EXECUTORCH_CPP_VERSION
The version of the ExecuTorch C++ library that this crate is compatible and linked with.

Functions§

executorch_BufferDataLoader_as_data_loader_mut
executorch_BufferDataLoader_new
executorch_ETDumpGen_as_event_tracer_mut
executorch_ETDumpGen_get_etdump_data
executorch_ETDumpGen_new
executorch_EValue_as_bool
executorch_EValue_as_bool_list
executorch_EValue_as_f64
executorch_EValue_as_f64_list
executorch_EValue_as_i64
executorch_EValue_as_i64_list
executorch_EValue_as_optional_tensor_list
executorch_EValue_as_string
executorch_EValue_as_tensor
executorch_EValue_as_tensor_list
executorch_EValue_copy
executorch_EValue_destructor
executorch_EValue_move
executorch_EValue_new_from_bool
executorch_EValue_new_from_bool_list
executorch_EValue_new_from_f64
executorch_EValue_new_from_f64_list
executorch_EValue_new_from_i64
executorch_EValue_new_from_i64_list
executorch_EValue_new_from_optional_tensor_list
executorch_EValue_new_from_string
executorch_EValue_new_from_tensor
executorch_EValue_new_from_tensor_list
executorch_EValue_new_none
executorch_EValue_tag
executorch_FileDataLoader_as_data_loader_mut
executorch_FileDataLoader_destructor
executorch_FileDataLoader_new
executorch_HierarchicalAllocator_destructor
executorch_HierarchicalAllocator_new
executorch_MemoryAllocator_allocate
executorch_MemoryAllocator_new
executorch_MemoryManager_new
executorch_MethodMeta_get_backend_name
executorch_MethodMeta_input_tag
executorch_MethodMeta_input_tensor_meta
executorch_MethodMeta_memory_planned_buffer_size
executorch_MethodMeta_name
executorch_MethodMeta_num_backends
executorch_MethodMeta_num_inputs
executorch_MethodMeta_num_memory_planned_buffers
executorch_MethodMeta_num_outputs
executorch_MethodMeta_output_tag
executorch_MethodMeta_output_tensor_meta
executorch_MethodMeta_uses_backend
executorch_Method_destructor
executorch_Method_execute
executorch_Method_get_output
executorch_Method_inputs_size
executorch_Method_outputs_size
executorch_Method_set_input
executorch_MmapDataLoader_as_data_loader_mut
executorch_MmapDataLoader_destructor
executorch_MmapDataLoader_new
executorch_OptionalTensor_get
executorch_Program_check_header
executorch_Program_destructor
executorch_Program_get_method_name
executorch_Program_load
executorch_Program_load_method
executorch_Program_method_meta
executorch_Program_num_methods
executorch_TensorImpl_new
executorch_TensorInfo_dim_order
executorch_TensorInfo_nbytes
executorch_TensorInfo_scalar_type
executorch_TensorInfo_sizes
executorch_Tensor_const_data_ptr
executorch_Tensor_coordinate_to_index
executorch_Tensor_coordinate_to_index_unchecked
executorch_Tensor_destructor
executorch_Tensor_dim
executorch_Tensor_dim_order
executorch_Tensor_element_size
executorch_Tensor_mutable_data_ptr
executorch_Tensor_nbytes
executorch_Tensor_new
executorch_Tensor_numel
executorch_Tensor_scalar_type
executorch_Tensor_size
executorch_Tensor_sizes
executorch_Tensor_strides
executorch_VecChar_destructor
executorch_VecEValue_destructor
executorch_VecVecChar_destructor
executorch_is_valid_dim_order_and_strides
executorch_pal_init
executorch_stride_to_dim_order

Type Aliases§

DimOrderType
The type used for elements of Tensor.dim_order().
SizesType
The type used for elements of Tensor.sizes().
StridesType
The type used for elements of Tensor.strides().

Unions§

OptionalTensorStorage__bindgen_ty_1
Program__bindgen_ty_1__bindgen_ty_1