executorch_sys/lib.rs

#![cfg_attr(deny_warnings, deny(warnings))]
// some new clippy::lint annotations are supported in latest Rust but not recognized by older versions
#![cfg_attr(deny_warnings, allow(unknown_lints))]
#![cfg_attr(deny_warnings, deny(missing_docs))]
#![cfg_attr(docsrs, feature(doc_cfg))]

//! Unsafe bindings for ExecuTorch - On-device AI across mobile, embedded and edge for PyTorch.
//!
//! Provides low-level Rust bindings for the ExecuTorch library.
//! For the common use case, it is recommended to use the high-level API provided by the `executorch` crate, where
//! more detailed documentation can be found.
//!
//! To build the library, you need to build the C++ library yourself first.
//! The currently supported C++ ExecuTorch version is `1.0.1` (or `1.0.0`).
//! The C++ library allows for great flexibility with many flags, customizing which modules, kernels, and extensions
//! are built.
//! Multiple static libraries are built, and the Rust library links to them.
//! In the following example we build the C++ library with the necessary flags to run the `hello_world` example:
//! ```bash
//! # Clone the C++ library
//! cd ${EXECUTORCH_CPP_DIR}
//! git clone --depth 1 --branch v1.0.1 https://github.com/pytorch/executorch.git .
//! git submodule sync --recursive
//! git submodule update --init --recursive
//!
//! # Install requirements
//! ./install_requirements.sh
//!
//! # Build C++ library
//! mkdir cmake-out && cd cmake-out
//! cmake \
//!     -DEXECUTORCH_SELECT_OPS_LIST=aten::add.out \
//!     -DEXECUTORCH_BUILD_EXECUTOR_RUNNER=OFF \
//!     -DEXECUTORCH_BUILD_EXTENSION_RUNNER_UTIL=OFF \
//!     -DEXECUTORCH_BUILD_PORTABLE_OPS=ON \
//!     -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
//!     -DEXECUTORCH_BUILD_EXTENSION_FLAT_TENSOR=ON \
//!     -DEXECUTORCH_BUILD_EXTENSION_NAMED_DATA_MAP=ON \
//!     -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
//!     -DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \
//!     -DEXECUTORCH_ENABLE_PROGRAM_VERIFICATION=ON \
//!     -DEXECUTORCH_ENABLE_LOGGING=ON \
//!     ..
//! make -j
//!
//! # Run example
//! # We set EXECUTORCH_RS_EXECUTORCH_LIB_DIR to the path of the C++ build output
//! cd ${EXECUTORCH_RS_DIR}/examples/hello_world
//! python export_model.py
//! EXECUTORCH_RS_EXECUTORCH_LIB_DIR=${EXECUTORCH_CPP_DIR}/cmake-out cargo run
//! ```
//!
//! The `executorch` crate will always look for the following static libraries:
//! - `libexecutorch.a`
//! - `libexecutorch_core.a`
//!
//! Additional libraries are required if feature flags are enabled.
//! For example, `libextension_data_loader.a` is required if the `data-loader` feature is enabled,
//! and `libextension_tensor.a` is required if the `tensor-ptr` feature is enabled.
//! See the feature flags section for more info.
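//!
//! To sanity-check which libraries your C++ build produced, you can list the static archives in the CMake build
//! output (a rough sketch; the exact layout depends on your CMake configuration):
//! ```bash
//! # List the static libraries produced by the C++ build (path taken from the example above)
//! find ${EXECUTORCH_CPP_DIR}/cmake-out -name 'lib*.a'
//! ```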
//!
//! The static libraries of the kernel implementations are required only if your model uses them, and they should be
//! **linked manually** by the binary that uses the `executorch` crate.
//! For example, the `hello_world` example uses a model with a single addition operation, so it compiles the C++
//! library with `-DEXECUTORCH_SELECT_OPS_LIST=aten::add.out` and contains the following lines in its `build.rs`:
//! ```rust
//! println!("cargo::rustc-link-lib=static:+whole-archive=portable_kernels");
//! println!("cargo::rustc-link-lib=static:+whole-archive=portable_ops_lib");
//!
//! let libs_dir = std::env::var("EXECUTORCH_RS_EXECUTORCH_LIB_DIR").unwrap();
//! println!("cargo::rustc-link-search=native={libs_dir}/kernels/portable/");
//! ```
//! Note that the ops and kernels libs are linked with `+whole-archive` to ensure that all symbols are included in the
//! binary.
//!
//! The `EXECUTORCH_RS_EXECUTORCH_LIB_DIR` environment variable should be set to the path of the C++ build output.
//! If it is not provided, it is the responsibility of the binary to add the library directories to the linker search
//! path, and the crate will just link to the static libraries using `cargo::rustc-link-lib=...`.
//!
//! If you want to link to the executorch libs yourself, set the environment variable `EXECUTORCH_RS_LINK` to `0`, and
//! the crate will not link to any library or modify the linker search path.
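//!
//! For example (a minimal sketch; the build directory is just the path used in the example above):
//! ```bash
//! # Let the crate add the C++ build output to the linker search path and link the static libraries
//! EXECUTORCH_RS_EXECUTORCH_LIB_DIR=${EXECUTORCH_CPP_DIR}/cmake-out cargo build
//!
//! # Or disable all linking done by the crate and handle it yourself, e.g. in your own build.rs
//! EXECUTORCH_RS_LINK=0 cargo build
//! ```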
//!
//! The crate contains a small C/C++ bridge that uses the headers of the C++ library,
//! and it is compiled using the `cc` crate (and the `cxx` crate, which uses `cc` under the hood).
//! If custom compiler flags (for example `-DET_MIN_LOG_LEVEL=Debug`) are used when compiling the C++ library,
//! you should set the matching environment variables that `cc` reads during `cargo build`
//! (for example `CFLAGS=-DET_MIN_LOG_LEVEL=Debug CXXFLAGS=-DET_MIN_LOG_LEVEL=Debug`);
//! see the [cc docs](https://docs.rs/cc/latest/cc/).
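//!
//! For example (a sketch, reusing the `-DET_MIN_LOG_LEVEL=Debug` flag mentioned above):
//! ```bash
//! # Pass the same defines used for the C++ build to the C/C++ bridge compiled by `cc`/`cxx`
//! CFLAGS=-DET_MIN_LOG_LEVEL=Debug CXXFLAGS=-DET_MIN_LOG_LEVEL=Debug cargo build
//! ```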
//!
//! ## Cargo Features
//! By default the `std` feature is enabled.
//! - `data-loader`:
//!   Includes the [`FileDataLoader`] and [`MmapDataLoader`] structs. Without this feature the only available
//!   data loader is [`BufferDataLoader`]. The `libextension_data_loader.a` static library is required; compile the
//!   C++ `executorch` library with `EXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON`.
//! - `module`:
//!   Includes the `Module` struct, a high-level API for loading and executing PyTorch models. It is an alternative to
//!   the lower-level `Program` API, which is more suitable for embedded systems.
//!   The `libextension_module_static.a` static library is required; compile the C++ `executorch` library with
//!   `EXECUTORCH_BUILD_EXTENSION_MODULE=ON`.
//!   Also enables the `std`, `data-loader` and `flat-tensor` features.
//! - `tensor-ptr`:
//!   Includes a few functions that create `cxx::SharedPtr<Tensor>` pointers, which manage the lifetime of the tensor
//!   object alongside the lifetimes of the data buffer and additional metadata. The `libextension_tensor.a`
//!   static library is required; compile the C++ `executorch` library with `EXECUTORCH_BUILD_EXTENSION_TENSOR=ON`.
//!   Also enables the `std` feature.
//! - `flat-tensor`:
//!   Includes the `FlatTensorDataMap` struct that can read `.ptd` files with external tensors for models.
//!   The `libextension_flat_tensor.a` static library is required; compile the C++ `executorch` library with
//!   `EXECUTORCH_BUILD_EXTENSION_FLAT_TENSOR=ON`.
//! - `etdump`:
//!   Includes the `ETDumpGen` struct, an implementation of an `EventTracer`, used for debugging and profiling.
//!   The `libetdump.a` static library is required; compile the C++ `executorch` library with
//!   `EXECUTORCH_BUILD_DEVTOOLS=ON` and `EXECUTORCH_ENABLE_EVENT_TRACER=ON`.
//!   In addition, the `flatcc` (or `flatcc_d`) library is required, available at `{CMAKE_DIR}/third-party/flatcc_ep/lib/`,
//!   and should be linked by the user.
//! - `std`:
//!   Enables the standard library. This feature is enabled by default, but can be disabled to build `executorch` in a
//!   `no_std` environment.
//!   NOTE: `no_std` support is still a work in progress, see <https://github.com/pytorch/executorch/issues/4561>.
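//!
//! Non-default features are enabled as usual with Cargo (a hypothetical sketch; it assumes you depend on this crate
//! directly and that the package name is `executorch-sys`, adjust to the actual package you use):
//! ```bash
//! # Enable the data-loader and module features in addition to the default std feature
//! cargo add executorch-sys --features data-loader,module
//! ```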
//!
//! [`FileDataLoader`]: crate::FileDataLoader
//! [`MmapDataLoader`]: crate::MmapDataLoader
//! [`BufferDataLoader`]: crate::BufferDataLoader
//! [`Module`]: crate::Module

#![cfg_attr(not(feature = "std"), no_std)]

#[cfg(not(feature = "std"))]
extern crate core as std;

#[cfg(all(feature = "std", link_cxx))]
extern crate link_cplusplus;

/// The version of the ExecuTorch C++ library that this crate is compatible with and linked against.
pub const EXECUTORCH_CPP_VERSION: &str = "1.0.1";

mod c_bridge;
pub use c_bridge::*;

#[cfg(feature = "std")]
mod cxx_bridge;
#[cfg(feature = "std")]
pub use cxx_bridge::*;

/// Utility functions and structs.
pub mod util {
    #[cfg(feature = "tensor-ptr")]
    pub use super::cxx_bridge::tensor_ptr::cxx_util::*;
}

// Re-export cxx
#[cfg(feature = "std")]
pub use cxx;