#[repr(C)]
pub struct TfLiteContext {
pub tensors_size: usize,
pub GetExecutionPlan: Option<unsafe extern "C" fn(context: *mut TfLiteContext, execution_plan: *mut *mut TfLiteIntArray) -> TfLiteStatus>,
pub tensors: *mut TfLiteTensor,
pub impl_: *mut c_void,
pub ResizeTensor: Option<unsafe extern "C" fn(arg1: *mut TfLiteContext, tensor: *mut TfLiteTensor, new_size: *mut TfLiteIntArray) -> TfLiteStatus>,
pub ReportError: Option<unsafe extern "C" fn(arg1: *mut TfLiteContext, msg: *const c_char, ...)>,
pub AddTensors: Option<unsafe extern "C" fn(arg1: *mut TfLiteContext, tensors_to_add: c_int, first_new_tensor_index: *mut c_int) -> TfLiteStatus>,
pub GetNodeAndRegistration: Option<unsafe extern "C" fn(arg1: *mut TfLiteContext, node_index: c_int, node: *mut *mut TfLiteNode, registration: *mut *mut TfLiteRegistration) -> TfLiteStatus>,
pub ReplaceNodeSubsetsWithDelegateKernels: Option<unsafe extern "C" fn(arg1: *mut TfLiteContext, registration: TfLiteRegistration, nodes_to_replace: *const TfLiteIntArray, delegate: *mut TfLiteDelegate) -> TfLiteStatus>,
pub recommended_num_threads: c_int,
pub GetExternalContext: Option<unsafe extern "C" fn(arg1: *mut TfLiteContext, arg2: TfLiteExternalContextType) -> *mut TfLiteExternalContext>,
pub SetExternalContext: Option<unsafe extern "C" fn(arg1: *mut TfLiteContext, arg2: TfLiteExternalContextType, arg3: *mut TfLiteExternalContext)>,
pub allow_fp32_relax_to_fp16: bool,
pub profiler: *mut c_void,
pub AllocatePersistentBuffer: Option<unsafe extern "C" fn(ctx: *mut TfLiteContext, bytes: usize) -> *mut c_void>,
pub AllocateBufferForEval: Option<unsafe extern "C" fn(ctx: *mut TfLiteContext, bytes: usize, ptr: *mut *mut c_void) -> TfLiteStatus>,
pub RequestScratchBufferInArena: Option<unsafe extern "C" fn(ctx: *mut TfLiteContext, bytes: usize, buffer_idx: *mut c_int) -> TfLiteStatus>,
pub GetScratchBuffer: Option<unsafe extern "C" fn(ctx: *mut TfLiteContext, buffer_idx: c_int) -> *mut c_void>,
pub ResizeTensorExplicit: Option<unsafe extern "C" fn(ctx: *mut TfLiteContext, tensor: *mut TfLiteTensor, dims: c_int, shape: *const c_int) -> TfLiteStatus>,
pub PreviewDelegatePartitioning: Option<unsafe extern "C" fn(context: *mut TfLiteContext, nodes_to_replace: *const TfLiteIntArray, partition_params_array: *mut *mut TfLiteDelegateParams, num_partitions: *mut c_int) -> TfLiteStatus>,
pub GetTensor: Option<unsafe extern "C" fn(context: *const TfLiteContext, tensor_idx: c_int) -> *mut TfLiteTensor>,
pub GetEvalTensor: Option<unsafe extern "C" fn(context: *const TfLiteContext, tensor_idx: c_int) -> *mut TfLiteEvalTensor>,
pub GetModelMetadata: Option<unsafe extern "C" fn(context: *const TfLiteContext, name: *const c_char, ptr: *mut *const c_char, bytes: *mut usize) -> TfLiteStatus>,
pub AcquireSubgraphContext: Option<unsafe extern "C" fn(context: *mut TfLiteContext, subgraph_index: c_int, acquired_context: *mut *mut TfLiteContext) -> TfLiteStatus>,
pub ReleaseSubgraphContext: Option<unsafe extern "C" fn(context: *mut TfLiteContext, subgraph_index: c_int) -> TfLiteStatus>,
}
TfLiteContext allows an op to access the tensors.
TfLiteContext is a struct that is created by the TF Lite runtime
and passed to the “methods” (C function pointers) in the
TfLiteRegistration struct that are used to define custom ops and custom
delegate kernels. It contains information and methods (C function pointers)
that can be called by the code implementing a custom op or a custom delegate
kernel. These methods provide access to the context in which that custom op
or custom delegate kernel occurs, such as access to the input and output
tensors for that op, as well as methods for allocating memory buffers
and intermediate tensors, etc.
See also TfLiteOpaqueContext, which is a more ABI-stable equivalent.
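Because every callback in this struct is an `Option`-wrapped `unsafe extern "C"` function pointer, Rust callers must null-check before invoking. A minimal sketch of that calling pattern, using a mock context (`MockContext`, `get_scratch`, and `fetch` are stand-ins, not part of the real bindings):

```rust
use std::ffi::c_void;
use std::os::raw::c_int;

// Stand-in for TfLiteContext with a single callback slot; the real
// GetScratchBuffer field has the same Option<unsafe extern "C" fn> shape.
#[repr(C)]
struct MockContext {
    get_scratch_buffer:
        Option<unsafe extern "C" fn(ctx: *mut MockContext, buffer_idx: c_int) -> *mut c_void>,
}

// A fake runtime-provided callback: encodes the index in the returned pointer.
unsafe extern "C" fn get_scratch(_ctx: *mut MockContext, idx: c_int) -> *mut c_void {
    idx as usize as *mut c_void
}

// The runtime may leave any callback NULL, so check the Option before calling.
fn fetch(ctx: &mut MockContext, idx: c_int) -> Option<*mut c_void> {
    let f = ctx.get_scratch_buffer?;
    Some(unsafe { f(ctx as *mut MockContext, idx) })
}

fn main() {
    let mut ctx = MockContext { get_scratch_buffer: Some(get_scratch) };
    assert_eq!(fetch(&mut ctx, 7).map(|p| p as usize), Some(7));
    ctx.get_scratch_buffer = None;
    assert!(fetch(&mut ctx, 0).is_none()); // NULL callback: no call is made
}
```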
Fields
tensors_size: usize
Number of tensors in the context.
GetExecutionPlan: Option<unsafe extern "C" fn(context: *mut TfLiteContext, execution_plan: *mut *mut TfLiteIntArray) -> TfLiteStatus>
The execution plan contains a list of the node indices in execution order. execution_plan->size is the current number of nodes, and execution_plan->data[0] is the first node that needs to be run. TfLiteDelegates can traverse the current execution plan by iterating through each member of this array and using GetNodeAndRegistration() to access details about a node. For example:
TfLiteIntArray* execution_plan;
TF_LITE_ENSURE_STATUS(context->GetExecutionPlan(context,
&execution_plan));
for (int exec_index = 0; exec_index < execution_plan->size;
exec_index++) {
int node_index = execution_plan->data[exec_index];
TfLiteNode* node;
TfLiteRegistration* reg;
context->GetNodeAndRegistration(context, node_index, &node, &reg);
}
Note: the memory pointed to by *execution_plan is OWNED by the TfLite runtime.
Future calls to GetExecutionPlan invalidate earlier outputs. The
following code snippet shows the issue of such an invocation pattern.
After calling CheckNode, subsequent access to plan_1st is undefined.
void CheckNode(const TfLiteNode* node) {
...
TfLiteIntArray* plan_2nd;
TF_LITE_ENSURE_STATUS(
context->GetExecutionPlan(context, &plan_2nd)
);
...
}
TfLiteIntArray* plan_1st;
TF_LITE_ENSURE_STATUS(context->GetExecutionPlan(context, &plan_1st));
for (int exec_index = 0; exec_index < plan_1st->size; exec_index++) {
int node_index = plan_1st->data[exec_index];
TfLiteNode* node;
TfLiteRegistration* reg;
context->GetNodeAndRegistration(context, node_index, &node, &reg);
CheckNode(node);
}
WARNING: This is an experimental interface that is subject to change.
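The invalidation hazard above can be reproduced with a small Rust mock (assumed types, not the real bindings): the runtime owns and reuses one internal buffer for the plan, so the node indices must be copied out before any call that may fetch the plan again.

```rust
// Mock runtime that, like the TfLite runtime, owns the plan buffer and
// reuses it on every call; earlier returned pointers become invalid.
struct MockRuntime {
    scratch: Vec<i32>,
    calls: i32,
}

impl MockRuntime {
    fn get_execution_plan(&mut self) -> *const i32 {
        self.calls += 1;
        // Reallocating here is what invalidates previously returned pointers.
        self.scratch = vec![self.calls; 3];
        self.scratch.as_ptr()
    }
}

fn main() {
    let mut rt = MockRuntime { scratch: Vec::new(), calls: 0 };
    let plan_1st = rt.get_execution_plan();
    // Defensive copy taken while plan_1st is still valid.
    let copy: Vec<i32> = unsafe { std::slice::from_raw_parts(plan_1st, 3) }.to_vec();
    let _plan_2nd = rt.get_execution_plan(); // plan_1st now dangles
    assert_eq!(copy, vec![1, 1, 1]); // the copy survives; plan_1st must not be read
}
```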
tensors: *mut TfLiteTensor
An array of tensors in the interpreter context (of length tensors_size).
impl_: *mut c_void
Opaque full context ptr (an opaque C++ data structure).
ResizeTensor: Option<unsafe extern "C" fn(arg1: *mut TfLiteContext, tensor: *mut TfLiteTensor, new_size: *mut TfLiteIntArray) -> TfLiteStatus>
Request that the tensor's memory be resized. Updates dimensions on the tensor. NOTE: ResizeTensor takes ownership of new_size.
ReportError: Option<unsafe extern "C" fn(arg1: *mut TfLiteContext, msg: *const c_char, ...)>
Request that an error be reported with format string msg.
AddTensors: Option<unsafe extern "C" fn(arg1: *mut TfLiteContext, tensors_to_add: c_int, first_new_tensor_index: *mut c_int) -> TfLiteStatus>
Add tensors_to_add tensors, preserving pre-existing Tensor entries. If
non-null, the value pointed to by first_new_tensor_index will be set to
the index of the first new tensor.
GetNodeAndRegistration: Option<unsafe extern "C" fn(arg1: *mut TfLiteContext, node_index: c_int, node: *mut *mut TfLiteNode, registration: *mut *mut TfLiteRegistration) -> TfLiteStatus>
Get a Tensor node by node_index.
WARNING: This is an experimental interface that is subject to change.
ReplaceNodeSubsetsWithDelegateKernels: Option<unsafe extern "C" fn(arg1: *mut TfLiteContext, registration: TfLiteRegistration, nodes_to_replace: *const TfLiteIntArray, delegate: *mut TfLiteDelegate) -> TfLiteStatus>
Replace ops with one or more stub delegate operations. This function
does not take ownership of nodes_to_replace.
recommended_num_threads: c_int
Number of threads that are recommended to subsystems like gemmlowp and eigen.
GetExternalContext: Option<unsafe extern "C" fn(arg1: *mut TfLiteContext, arg2: TfLiteExternalContextType) -> *mut TfLiteExternalContext>
Access external contexts by type.
WARNING: This is an experimental interface that is subject to change.
SetExternalContext: Option<unsafe extern "C" fn(arg1: *mut TfLiteContext, arg2: TfLiteExternalContextType, arg3: *mut TfLiteExternalContext)>
Set the value of an external context. Does not take ownership of the pointer.
WARNING: This is an experimental interface that is subject to change.
allow_fp32_relax_to_fp16: bool
Flag for allowing float16 precision for FP32 calculation. Default: false.
WARNING: This is an experimental API and subject to change.
profiler: *mut c_void
Pointer to the op-level profiler, if set; nullptr otherwise.
AllocatePersistentBuffer: Option<unsafe extern "C" fn(ctx: *mut TfLiteContext, bytes: usize) -> *mut c_void>
Allocate a persistent buffer which has the same lifetime as the
interpreter. Returns nullptr on failure. The memory is allocated from
heap for TFL, and from tail in TFLM. This method is only available in
Init or Prepare stage.
WARNING: This is an experimental interface that is subject to change.
AllocateBufferForEval: Option<unsafe extern "C" fn(ctx: *mut TfLiteContext, bytes: usize, ptr: *mut *mut c_void) -> TfLiteStatus>
Allocate a buffer which will be deallocated right after invoke phase. The memory is allocated from heap in TFL, and from volatile arena in TFLM. This method is only available in invoke stage.
NOTE: If possible use RequestScratchBufferInArena method to avoid memory
allocation during inference time.
WARNING: This is an experimental interface that is subject to change.
RequestScratchBufferInArena: Option<unsafe extern "C" fn(ctx: *mut TfLiteContext, bytes: usize, buffer_idx: *mut c_int) -> TfLiteStatus>
Request a scratch buffer in the arena through static memory planning.
This method is only available in Prepare stage and the buffer is
allocated by the interpreter between Prepare and Eval stage. In Eval
stage, GetScratchBuffer API can be used to fetch the address.
WARNING: This is an experimental interface that is subject to change.
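The Request/Get pairing can be sketched as a toy static planner (the `Arena` type below is hypothetical, not the TFLM implementation): Prepare-stage requests only record sizes and return stable indices; addresses exist only after planning, which is when Eval-stage GetScratchBuffer lookups become valid.

```rust
// Toy arena planner: request in "Prepare", resolve addresses in "Eval".
#[derive(Default)]
struct Arena {
    sizes: Vec<usize>,
    offsets: Vec<usize>,
    storage: Vec<u8>,
}

impl Arena {
    // Prepare stage: record the size, hand back a stable index.
    fn request_scratch(&mut self, bytes: usize) -> i32 {
        self.sizes.push(bytes);
        (self.sizes.len() - 1) as i32
    }

    // Between Prepare and Eval: lay out all requested buffers contiguously.
    fn plan(&mut self) {
        let mut off = 0;
        for &s in &self.sizes {
            self.offsets.push(off);
            off += s;
        }
        self.storage = vec![0u8; off];
    }

    // Eval stage: an index resolves to an address only after planning.
    fn get_scratch(&mut self, idx: i32) -> *mut u8 {
        unsafe { self.storage.as_mut_ptr().add(self.offsets[idx as usize]) }
    }
}

fn main() {
    let mut arena = Arena::default();
    let a = arena.request_scratch(16);
    let b = arena.request_scratch(8);
    arena.plan();
    let pa = arena.get_scratch(a) as usize;
    let pb = arena.get_scratch(b) as usize;
    assert_eq!(pb - pa, 16); // b sits right after a's 16 bytes
}
```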
GetScratchBuffer: Option<unsafe extern "C" fn(ctx: *mut TfLiteContext, buffer_idx: c_int) -> *mut c_void>
Get the scratch buffer pointer. This method is only available in Eval stage.
WARNING: This is an experimental interface that is subject to change.
ResizeTensorExplicit: Option<unsafe extern "C" fn(ctx: *mut TfLiteContext, tensor: *mut TfLiteTensor, dims: c_int, shape: *const c_int) -> TfLiteStatus>
Resize the memory pointer of the tensor. This method behaves the same as
ResizeTensor, except that it makes a copy of the shape array internally
so the shape array could be deallocated right afterwards.
WARNING: This is an experimental interface that is subject to change.
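The ownership contrast with ResizeTensor can be illustrated with mock types (`MockTensor` and both functions below are illustrative stand-ins): a ResizeTensor-style call consumes its shape argument, while a ResizeTensorExplicit-style call copies out of a borrowed shape, so the caller's array stays usable afterwards.

```rust
// Mock tensor keeping only a shape; stands in for TfLiteTensor.
struct MockTensor {
    shape: Vec<i32>,
}

// ResizeTensor-style: takes ownership of the new shape (caller gives it up).
fn resize_tensor(t: &mut MockTensor, new_size: Vec<i32>) {
    t.shape = new_size; // moved in; the caller can no longer use it
}

// ResizeTensorExplicit-style: copies out of a borrowed shape slice.
fn resize_tensor_explicit(t: &mut MockTensor, shape: &[i32]) {
    t.shape = shape.to_vec(); // caller's buffer may be freed right after
}

fn main() {
    let mut t = MockTensor { shape: vec![1] };
    resize_tensor(&mut t, vec![2, 3]);
    assert_eq!(t.shape, [2, 3]);

    let local = [4, 5, 6]; // may go out of scope immediately after the call
    resize_tensor_explicit(&mut t, &local);
    assert_eq!(t.shape, [4, 5, 6]); // the tensor holds its own copy
}
```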
PreviewDelegatePartitioning: Option<unsafe extern "C" fn(context: *mut TfLiteContext, nodes_to_replace: *const TfLiteIntArray, partition_params_array: *mut *mut TfLiteDelegateParams, num_partitions: *mut c_int) -> TfLiteStatus>
This method provides a preview of post-delegation partitioning. Each TfLiteDelegateParams in the referenced array corresponds to one instance of the delegate kernel. Example usage:
TfLiteIntArray* nodes_to_replace = ...;
TfLiteDelegateParams* params_array;
int num_partitions = 0;
TF_LITE_ENSURE_STATUS(context->PreviewDelegatePartitioning(
context, nodes_to_replace, &params_array,
&num_partitions));
for (int idx = 0; idx < num_partitions; idx++) {
const auto& partition_params = params_array[idx];
...
}
NOTE: The context owns the memory referenced by partition_params_array. It will be cleared with another call to PreviewDelegatePartitioning, or after TfLiteDelegateParams::Prepare returns.
WARNING: This is an experimental interface that is subject to change.
GetTensor: Option<unsafe extern "C" fn(context: *const TfLiteContext, tensor_idx: c_int) -> *mut TfLiteTensor>
Returns a TfLiteTensor struct for a given index.
WARNING: This is an experimental interface that is subject to change.
WARNING: This method may not be available on all platforms.
GetEvalTensor: Option<unsafe extern "C" fn(context: *const TfLiteContext, tensor_idx: c_int) -> *mut TfLiteEvalTensor>
Returns a TfLiteEvalTensor struct for a given index.
WARNING: This is an experimental interface that is subject to change.
WARNING: This method may not be available on all platforms.
GetModelMetadata: Option<unsafe extern "C" fn(context: *const TfLiteContext, name: *const c_char, ptr: *mut *const c_char, bytes: *mut usize) -> TfLiteStatus>
Retrieves a named metadata buffer from the TFLite model.
Returns kTfLiteOk if the metadata is successfully obtained from the flatbuffer
Model: that is, there exists a metadata entry with the given name string
(see TFLite's schema.fbs).
The corresponding buffer information is populated in ptr & bytes.
The data from ptr is valid for the lifetime of the Interpreter.
WARNING: This is an experimental interface that is subject to change.
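The ptr/bytes out-parameter convention translates to Rust roughly as follows (the lookup below is mocked; in real use the call goes through the GetModelMetadata function pointer, and the metadata name shown is only an example):

```rust
use std::os::raw::c_char;

// Mock metadata table standing in for the model's flatbuffer metadata.
static METADATA: &[u8] = b"min_runtime_version=1.14";

// Mimics the C out-parameter convention: on success, write the pointer and
// length through the caller-provided locations and return 0 (kTfLiteOk).
unsafe fn get_model_metadata(name: &str, ptr: *mut *const c_char, bytes: *mut usize) -> i32 {
    if name == "min_runtime_version" {
        *ptr = METADATA.as_ptr() as *const c_char;
        *bytes = METADATA.len();
        0 // kTfLiteOk
    } else {
        1 // kTfLiteError
    }
}

fn main() {
    let mut ptr: *const c_char = std::ptr::null();
    let mut bytes: usize = 0;
    let status = unsafe { get_model_metadata("min_runtime_version", &mut ptr, &mut bytes) };
    assert_eq!(status, 0);
    // Reconstruct a byte slice from the two out-parameters.
    let data = unsafe { std::slice::from_raw_parts(ptr as *const u8, bytes) };
    assert_eq!(data, METADATA);
}
```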
AcquireSubgraphContext: Option<unsafe extern "C" fn(context: *mut TfLiteContext, subgraph_index: c_int, acquired_context: *mut *mut TfLiteContext) -> TfLiteStatus>
Retrieves the corresponding TfLiteContext of a subgraph that the given subgraph_index points to and switches to the delegate context for that subgraph. If an invalid subgraph index is given, returns kTfLiteError.
NOTE: This function is expected to be paired with ReleaseSubgraphContext() once the delegate preparation is done and/or the delegate context functions are no longer needed.
WARNING: This is an experimental interface that is subject to change.
ReleaseSubgraphContext: Option<unsafe extern "C" fn(context: *mut TfLiteContext, subgraph_index: c_int) -> TfLiteStatus>
Releases the subgraph context by switching back to the TFLite kernel context for the subgraph that the given subgraph_index points to.
NOTE: This function is expected to be used after AcquireSubgraphContext() once the delegate preparation is done and/or the delegate context functions are no longer needed.
WARNING: This is an experimental interface that is subject to change.
Trait Implementations
impl Clone for TfLiteContext
fn clone(&self) -> TfLiteContext
fn clone_from(&mut self, source: &Self)