Crate executorch_sys

Unsafe bindings for ExecuTorch - On-device AI across mobile, embedded and edge for PyTorch.

Provides low-level Rust bindings for the ExecuTorch library. For common use cases, it is recommended to use the high-level API provided by the executorch crate, where more detailed documentation can be found.

To build this crate, you must first build the C++ library yourself. The currently supported C++ ExecuTorch version is 1.0.1 (or 1.0.0). The C++ build allows great flexibility through many flags that customize which modules, kernels, and extensions are built. Multiple static libraries are produced, and the Rust library links against them. The following example builds the C++ library with the flags required to run the hello_world example:

# Clone the C++ library
cd ${EXECUTORCH_CPP_DIR}
git clone --depth 1 --branch v1.0.1 https://github.com/pytorch/executorch.git .
git submodule sync --recursive
git submodule update --init --recursive

# Install requirements
./install_requirements.sh

# Build C++ library
mkdir cmake-out && cd cmake-out
cmake \
    -DEXECUTORCH_SELECT_OPS_LIST=aten::add.out \
    -DEXECUTORCH_BUILD_EXECUTOR_RUNNER=OFF \
    -DEXECUTORCH_BUILD_EXTENSION_RUNNER_UTIL=OFF \
    -DEXECUTORCH_BUILD_PORTABLE_OPS=ON \
    -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
    -DEXECUTORCH_BUILD_EXTENSION_FLAT_TENSOR=ON \
    -DEXECUTORCH_BUILD_EXTENSION_NAMED_DATA_MAP=ON \
    -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
    -DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \
    -DEXECUTORCH_ENABLE_PROGRAM_VERIFICATION=ON \
    -DEXECUTORCH_ENABLE_LOGGING=ON \
    ..
make -j

# Run example
# We set EXECUTORCH_RS_EXECUTORCH_LIB_DIR to the path of the C++ build output
cd ${EXECUTORCH_RS_DIR}/examples/hello_world
python export_model.py
EXECUTORCH_RS_EXECUTORCH_LIB_DIR=${EXECUTORCH_CPP_DIR}/cmake-out cargo run

The executorch crate will always look for the following static libraries:

  • libexecutorch.a
  • libexecutorch_core.a

Additional libraries are required when feature flags are enabled. For example, libextension_data_loader.a is required if the data-loader feature is enabled, and libextension_tensor.a is required if the tensor-ptr feature is enabled. See the feature flags section for more info.

The static libraries of the kernel implementations are required only if your model uses them, and they should be linked manually by the binary that uses the executorch crate. For example, the hello_world example uses a model with a single addition operation, so it compiles the C++ library with EXECUTORCH_SELECT_OPS_LIST=aten::add.out and contains the following lines in its build.rs:

// Link the kernel and ops registration libs; +whole-archive keeps the
// operator symbols that are registered via static constructors.
println!("cargo::rustc-link-lib=static:+whole-archive=portable_kernels");
println!("cargo::rustc-link-lib=static:+whole-archive=portable_ops_lib");

// Tell the linker where the C++ build placed the kernel libraries.
let libs_dir = std::env::var("EXECUTORCH_RS_EXECUTORCH_LIB_DIR").unwrap();
println!("cargo::rustc-link-search=native={libs_dir}/kernels/portable/");

Note that the ops and kernels libs are linked with +whole-archive to ensure that all symbols are included in the binary.

The EXECUTORCH_RS_EXECUTORCH_LIB_DIR environment variable should be set to the path of the C++ build output. If it is not provided, it is the responsibility of the binary to add the library directories to the linker search path; the crate will then only link to the static libraries using cargo::rustc-link-lib=....

If you want to link to the executorch libraries yourself, set the environment variable EXECUTORCH_RS_LINK to 0; the crate will then neither link to any library nor modify the linker search path.
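As a minimal sketch, opting out of the crate's automatic linking looks like this (the dependent binary then becomes responsible for all link directives):

```shell
# Disable this crate's automatic linking entirely; the dependent binary's
# build.rs must emit its own rustc-link-lib / rustc-link-search directives.
export EXECUTORCH_RS_LINK=0
```

With this set, run cargo build as usual.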

The crate contains a small C/C++ bridge that uses the C++ library's headers and is compiled using the cc crate (and the cxx crate, which uses cc under the hood). If custom compiler flags (for example -DET_MIN_LOG_LEVEL=Debug) were used when compiling the C++ library, set the matching environment variables that cc reads during cargo build (for example CFLAGS=-DET_MIN_LOG_LEVEL=Debug CXXFLAGS=-DET_MIN_LOG_LEVEL=Debug); see the cc docs.
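Concretely, matching the example flag above could look like this (adjust the flag values to whatever your C++ build actually used):

```shell
# Forward the C++ build's custom flags to the cc/cxx bridge compilation.
# -DET_MIN_LOG_LEVEL=Debug is the example flag from the text above.
export CFLAGS="-DET_MIN_LOG_LEVEL=Debug"
export CXXFLAGS="-DET_MIN_LOG_LEVEL=Debug"
```

With these exported, run cargo build as usual and the bridge code is compiled with the same log-level setting as the C++ library.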

§Cargo Features

By default the std feature is enabled.

  • data-loader: Includes the FileDataLoader and MmapDataLoader structs. Without this feature the only available data loader is BufferDataLoader. The libextension_data_loader.a static library is required; compile C++ executorch with EXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON.
  • module: Includes the Module struct, a high-level API for loading and executing PyTorch models. It is an alternative to the lower-level Program API, which is more suitable for embedded systems. The libextension_module_static.a static library is required; compile C++ executorch with EXECUTORCH_BUILD_EXTENSION_MODULE=ON. Also enables the std, data-loader and flat-tensor features.
  • tensor-ptr: Includes a few functions creating cxx::SharedPtr<Tensor> pointers that manage the lifetime of the tensor object alongside the lifetimes of the data buffer and additional metadata. The libextension_tensor.a static library is required; compile C++ executorch with EXECUTORCH_BUILD_EXTENSION_TENSOR=ON. Also enables the std feature.
  • flat-tensor: Includes the FlatTensorDataMap struct that can read .ptd files with external tensors for models. The libextension_flat_tensor.a static library is required; compile C++ executorch with EXECUTORCH_BUILD_EXTENSION_FLAT_TENSOR=ON.
  • etdump: Includes the ETDumpGen struct, an implementation of an EventTracer, used for debugging and profiling. The libetdump.a static library is required; compile C++ executorch with EXECUTORCH_BUILD_DEVTOOLS=ON and EXECUTORCH_ENABLE_EVENT_TRACER=ON. In addition, the flatcc (or flatcc_d) library is required, available at {CMAKE_DIR}/third-party/flatcc_ep/lib/, and should be linked by the user.
  • std: Enables the standard library. This feature is enabled by default, but can be disabled to build executorch in a no_std environment. NOTE: no_std is still WIP, see https://github.com/pytorch/executorch/issues/4561
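For illustration, a dependent crate might enable these features in its Cargo.toml like so (the package name and version below are assumptions; check the actual crate metadata):

```toml
[dependencies]
# Hypothetical manifest entry: enables the data loaders and the high-level
# Module API (which itself pulls in std, data-loader and flat-tensor).
executorch-sys = { version = "1.0", features = ["data-loader", "module"] }
```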

Re-exports§

pub use cxx; (std)

Modules§

util
Utility functions and structs.

Structs§

ArrayRefBool
ArrayRefChar
ArrayRefDimOrderType
ArrayRefEValue
ArrayRefEValuePtr
ArrayRefF64
ArrayRefI32
ArrayRefI64
ArrayRefOptionalTensor
ArrayRefSizesType
ArrayRefStridesType
ArrayRefTensor
ArrayRefU8
ArrayRefUsizeType
BoxedEvalueListI64
BoxedEvalueListOptionalTensor
BoxedEvalueListTensor
BufferDataLoader
DataLoaderRefMut
ETDumpGen
EValueRef
EValueRefMut
EValueStorage
EventTracer (module)
EventTracer is a class that users can inherit and implement to log/serialize/stream etc.
EventTracerRefMut
ExecutorchPalImpl
FileDataLoader
FlatTensorDataMap
HierarchicalAllocator
MallocMemoryAllocator
Dynamically allocates memory using malloc() and frees all pointers at destruction time.
MemoryAllocator
MemoryManager
Method
MethodMeta
MmapDataLoader
Module (module)
A facade class for loading programs and executing methods within them.
NamedDataMapRef
NamedDataMapRefMut
OptionalTensorRef
OptionalTensorRefMut
OptionalTensorStorage
Program
SpanI64
SpanOptionalTensor
SpanSpanU8
SpanTensor
SpanU8
Tensor (tensor-ptr)
A minimal Tensor type whose API is a source compatible subset of at::Tensor.
TensorImpl
TensorInfo
TensorLayout
TensorRef
TensorRefMut
TensorStorage
VecChar
VecEValue
VecVecChar
executorch_tick_ratio
Represents the conversion ratio from system ticks to nanoseconds. To convert, use nanoseconds = ticks * numerator / denominator.
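The tick-to-nanosecond conversion described for executorch_tick_ratio can be sketched in plain Rust (the ratio values below are hypothetical; real values come from the platform abstraction layer):

```rust
// nanoseconds = ticks * numerator / denominator, as documented for
// executorch_tick_ratio. Integer math, so the division truncates.
fn ticks_to_ns(ticks: u64, numerator: u64, denominator: u64) -> u64 {
    ticks * numerator / denominator
}

fn main() {
    // Hypothetical 10 MHz tick source: one tick is 100 ns.
    let ns = ticks_to_ns(5, 100, 1);
    println!("{ns}"); // prints 500
}
```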

Enums§

Error
ExecuTorch Error type.
MmapDataLoaderMlockConfig
Describes how and whether to lock loaded pages with mlock().
ModuleLoadMode
Enum to define loading behavior.
ProgramHeaderStatus
Describes the presence of an ExecuTorch program header.
ProgramVerification
Types of validation that the Program can do before parsing the data.
ScalarType
Tag
TensorShapeDynamism
The resizing capabilities of a Tensor.
executorch_pal_log_level
Severity level of a log message. Values must map to printable 7-bit ASCII uppercase letters.

Constants§

EXECUTORCH_CPP_VERSION
The version of the ExecuTorch C++ library that this crate is compatible and linked with.

Functions§

MallocMemoryAllocator_as_memory_allocator
Get a pointer to the base class MemoryAllocator.
MallocMemoryAllocator_new
Construct a new Malloc memory allocator.
Module_execute (module)
Execute a specific method with the given input values and retrieve the output values. Loads the program and method before executing if needed.
Module_is_loaded (module)
Checks if the program is loaded.
Module_is_method_loaded (module)
Checks if a specific method is loaded.
Module_load (module)
Load the program if needed.
Module_load_method (module)
Load a specific method from the program and set up memory management if needed.
Module_method_meta (module)
Get a method metadata struct by method name.
Module_method_names (module)
Get a list of method names available in the loaded program.
Module_new (module)
Constructs an instance by loading a program from a file with specified memory locking behavior.
Module_num_methods (module)
Get the number of methods available in the loaded program.
Module_unload_method (module)
Unload a specific method from the program.
TensorPtr_new (tensor-ptr)
Create a new tensor pointer.
executorch_BufferDataLoader_as_data_loader_mut
executorch_BufferDataLoader_new
executorch_ETDumpGen_as_event_tracer_mut
executorch_ETDumpGen_get_etdump_data
executorch_ETDumpGen_new
executorch_EValue_as_bool
executorch_EValue_as_bool_list
executorch_EValue_as_f64
executorch_EValue_as_f64_list
executorch_EValue_as_i64
executorch_EValue_as_i64_list
executorch_EValue_as_optional_tensor_list
executorch_EValue_as_string
executorch_EValue_as_tensor
executorch_EValue_as_tensor_list
executorch_EValue_copy
executorch_EValue_destructor
executorch_EValue_move
executorch_EValue_new_from_bool
executorch_EValue_new_from_bool_list
executorch_EValue_new_from_f64
executorch_EValue_new_from_f64_list
executorch_EValue_new_from_i64
executorch_EValue_new_from_i64_list
executorch_EValue_new_from_optional_tensor_list
executorch_EValue_new_from_string
executorch_EValue_new_from_tensor
executorch_EValue_new_from_tensor_list
executorch_EValue_new_none
executorch_EValue_tag
executorch_FileDataLoader_as_data_loader_mut
executorch_FileDataLoader_destructor
executorch_FileDataLoader_new
executorch_FlatTensorDataMap_as_named_data_map_mut
executorch_FlatTensorDataMap_load
executorch_HierarchicalAllocator_destructor
executorch_HierarchicalAllocator_new
executorch_MemoryAllocator_allocate
executorch_MemoryAllocator_new
executorch_MemoryManager_new
executorch_MethodMeta_attribute_tensor_meta
executorch_MethodMeta_get_backend_name
executorch_MethodMeta_input_tag
executorch_MethodMeta_input_tensor_meta
executorch_MethodMeta_memory_planned_buffer_size
executorch_MethodMeta_name
executorch_MethodMeta_num_attributes
executorch_MethodMeta_num_backends
executorch_MethodMeta_num_inputs
executorch_MethodMeta_num_memory_planned_buffers
executorch_MethodMeta_num_outputs
executorch_MethodMeta_output_tag
executorch_MethodMeta_output_tensor_meta
executorch_MethodMeta_uses_backend
executorch_Method_destructor
executorch_Method_execute
executorch_Method_get_attribute
executorch_Method_get_output
executorch_Method_inputs_size
executorch_Method_outputs_size
executorch_Method_set_input
executorch_MmapDataLoader_as_data_loader_mut
executorch_MmapDataLoader_destructor
executorch_MmapDataLoader_new
executorch_NamedDataMap_get_key
executorch_NamedDataMap_get_num_keys
executorch_NamedDataMap_get_tensor_layout
executorch_OptionalTensor_get
executorch_Program_check_header
executorch_Program_destructor
executorch_Program_get_method_name
executorch_Program_get_named_data_map
executorch_Program_load
executorch_Program_load_method
executorch_Program_method_meta
executorch_Program_num_methods
executorch_TensorImpl_new
executorch_TensorInfo_dim_order
executorch_TensorInfo_is_memory_planned
executorch_TensorInfo_name
executorch_TensorInfo_nbytes
executorch_TensorInfo_scalar_type
executorch_TensorInfo_sizes
executorch_TensorLayout_dim_order
executorch_TensorLayout_nbytes
executorch_TensorLayout_scalar_type
executorch_TensorLayout_sizes
executorch_Tensor_const_data_ptr
executorch_Tensor_coordinate_to_index
executorch_Tensor_coordinate_to_index_unchecked
executorch_Tensor_destructor
executorch_Tensor_dim
executorch_Tensor_dim_order
executorch_Tensor_element_size
executorch_Tensor_mutable_data_ptr
executorch_Tensor_nbytes
executorch_Tensor_new
executorch_Tensor_numel
executorch_Tensor_scalar_type
executorch_Tensor_size
executorch_Tensor_sizes
executorch_Tensor_strides
executorch_VecChar_destructor
executorch_VecEValue_destructor
executorch_VecVecChar_destructor
executorch_get_pal_impl
Returns the PAL function table, which contains function pointers to the active implementation of each PAL function.
executorch_is_valid_dim_order_and_strides
executorch_pal_abort
Immediately abort execution, setting the device into an error state, if available.
executorch_pal_allocate
NOTE: Core runtime code must not call this directly. It may only be called by a MemoryAllocator wrapper.
executorch_pal_current_ticks
Return a monotonically non-decreasing timestamp in system ticks.
executorch_pal_emit_log_message
Emit a log message via whatever logging output the platform supports.
executorch_pal_free
Frees memory allocated by et_pal_allocate().
executorch_pal_init
Initialize the platform abstraction layer.
executorch_pal_ticks_to_ns_multiplier
Return the conversion rate from system ticks to nanoseconds as a fraction. To convert a system ticks to nanoseconds, multiply the tick count by the numerator and then divide by the denominator: nanoseconds = ticks * numerator / denominator
executorch_register_pal
Override the PAL functions with user implementations. Any null entries in the table are unchanged and will keep the default implementation.
executorch_stride_to_dim_order

Type Aliases§

DimOrderType
The type used for elements of Tensor.dim_order().
SizesType
The type used for elements of Tensor.sizes().
StridesType
The type used for elements of Tensor.strides().
executorch_timestamp_t
Platform timestamp in system ticks.